
I can't start the Elasticsearch service on my Ubuntu 20.04 (Focal Fossa) installation:

# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
     Loaded: loaded (/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/elasticsearch.service.d
             └─override.conf
     Active: failed (Result: exit-code) since Sun 2021-10-17 21:30:19 UTC; 27min ago
       Docs: https://www.elastic.co
    Process: 2133147 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet>
   Main PID: 2133147 (code=exited, status=1/FAILURE)

Oct 17 21:30:18 stavilelk01 systemd-entrypoint[2133147]:         at org.elasticsearch.bootstrap.Elasticsearch.execute(>
Oct 17 21:30:18 stavilelk01 systemd-entrypoint[2133147]:         at org.elasticsearch.cli.EnvironmentAwareCommand.exec>
Oct 17 21:30:18 stavilelk01 systemd-entrypoint[2133147]:         at org.elasticsearch.cli.Command.mainWithoutErrorHand>
Oct 17 21:30:18 stavilelk01 systemd-entrypoint[2133147]:         at org.elasticsearch.cli.Command.main(Command.java:79)
Oct 17 21:30:18 stavilelk01 systemd-entrypoint[2133147]:         at org.elasticsearch.bootstrap.Elasticsearch.main(Ela>
Oct 17 21:30:18 stavilelk01 systemd-entrypoint[2133147]:         at org.elasticsearch.bootstrap.Elasticsearch.main(Ela>
Oct 17 21:30:18 stavilelk01 systemd-entrypoint[2133147]: For complete error details, refer to the log at /home/admin_e>
Oct 17 21:30:19 stavilelk01 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Oct 17 21:30:19 stavilelk01 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Oct 17 21:30:19 stavilelk01 systemd[1]: Failed to start Elasticsearch.

Here are the logs:

/var/log/elasticsearch# journalctl -xe
-- Automatic restarting of the unit logstash.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Oct 17 21:58:36 stavilelk01 systemd[1]: Stopped logstash.
-- Subject: A stop job for unit logstash.service has finished
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A stop job for unit logstash.service has finished.
--
-- The job identifier is 5920063 and the job result is done.
Oct 17 21:58:36 stavilelk01 systemd[1]: Started logstash.
-- Subject: A start job for unit logstash.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit logstash.service has finished successfully.
--
-- The job identifier is 5920063.
Oct 17 21:58:36 stavilelk01 logstash[2143095]: Using bundled JDK: /usr/share/logstash/jdk
Oct 17 21:58:36 stavilelk01 logstash[2143095]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was depreca>
Oct 17 21:58:36 stavilelk01 multipathd[828]: sda: add missing path
Oct 17 21:58:36 stavilelk01 multipathd[828]: sda: failed to get udev uid: Invalid argument
Oct 17 21:58:36 stavilelk01 multipathd[828]: sda: failed to get sysfs uid: Invalid argument
Oct 17 21:58:36 stavilelk01 multipathd[828]: sda: failed to get sgio uid: No such file or directory
Oct 17 21:58:36 stavilelk01 multipathd[828]: sdb: add missing path
Oct 17 21:58:36 stavilelk01 multipathd[828]: sdb: failed to get udev uid: Invalid argument
Oct 17 21:58:36 stavilelk01 multipathd[828]: sdb: failed to get sysfs uid: Invalid argument
Oct 17 21:58:36 stavilelk01 multipathd[828]: sdb: failed to get sgio uid: No such file or directory

How can I resolve this?

Aouatif Bouka

1 Answer


VMware does not expose the information udev needs to generate /dev/disk/by-id entries; several VMware products behave this way. The fix is to edit the .vmx file for your virtual machine and add:

disk.EnableUUID = "TRUE"

After a reboot, the disks will be properly visible and multipathd will stop complaining. This may also resolve your Elasticsearch issue.
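Roughly, the procedure looks like this (a sketch only: the .vmx path and VM name below are examples, adjust them to your environment, and power the VM off before editing the file):

# on the VMware host, with the guest powered off, append to its .vmx file
# (path is illustrative)
echo 'disk.EnableUUID = "TRUE"' >> /vmfs/volumes/datastore1/stavilelk01/stavilelk01.vmx

# after powering the guest back on, udev should create the by-id symlinks
ls -l /dev/disk/by-id/

# and multipathd should stop logging "failed to get udev uid" for sda/sdb
journalctl -u multipathd -n 50 --no-pager

If Elasticsearch still fails to start after that, the multipath warnings were probably not the root cause; check the full error in the log file your unit output points to (the path is truncated in your paste) or run journalctl -u elasticsearch.service for the complete stack trace.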

matigo
  • Hello, I tried the solution you propose but it doesn't work: ~# service elasticsearch start Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details. – Aouatif Bouka Oct 18 '21 at 09:07