
I have a master-slave setup of MySQL with GTID configured. I took a data backup of the master and am importing it into a separate test server. The import is failing with:

ERROR 1839 (HY000) at line 24: @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_MODE = ON

I tried with --set-gtid-purged=OFF and AUTO, but no luck.

JAVAC

3 Answers


If you run a

SHOW MASTER STATUS\G

you'll see something like this:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000299
         Position: 780437462
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 075d81d6-8d7f-11e3-9d88-b4b52f517ce4:1-616637650,
e907792a-8417-11e3-a037-b4b52f51dbf8:1-25385296642
1 row in set (0.00 sec)

Because when GTID is enabled, every server gets its own UUID and its own set of executed transactions. I suppose you created the dump with mysqldump; if you look at the beginning of that file, you'll find something similar to this:

--
-- GTID state at the beginning of the backup 
--

 SET @@GLOBAL.GTID_PURGED='075d81d6-8d7f-11e3-9d88-b4b52f517ce4:1-616648986,
 e907792a-8417-11e3-a037-b4b52f51dbf8:1-25385296642';

This is the statement that cannot be executed.
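
That is exactly what the ERROR 1839 above is complaining about: the target server has GTID disabled. You can verify that on the test server with something like:

mysql -e "SELECT @@GLOBAL.GTID_MODE;"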

You have the following options:

  • Remove this statement from the mysqldump file. Simply delete it. All the inserts will appear on the slave as its own local transactions.

  • If you want to prevent this from happening, you can also run RESET MASTER on the slave:

    mysql> RESET MASTER;

    This command clears the 'Executed_Gtid_Set' on the slave, so you can import the dump file directly and the SET @@GLOBAL.GTID_PURGED statement mentioned above can take effect.

  • When you create the dump, you can skip the GTID part entirely by adding the --set-gtid-purged=OFF option to mysqldump (a sketch of this and the previous option follows this list).
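
For illustration, a minimal sketch of the last two options; the host names, user names, database, and file names below are placeholders, not taken from the question:

# Create the dump without the GTID_PURGED statement
mysqldump --set-gtid-purged=OFF --single-transaction --routines --triggers \
          -h master_host -u backup_user -p mydatabase > dump_without_gtid.sql

# Or: clear the GTID state on the target server first, then import an unmodified dump
# so that the SET @@GLOBAL.GTID_PURGED statement in it can take effect
mysql -h target_host -u root -p -e "RESET MASTER;"
mysql -h target_host -u root -p mydatabase < dump.sql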

NOTE:

If the executed GTID set differs between the master and the slave (in case you want to use this in a replication setup), replication will not work. In that case I'd recommend a binary backup and restore, as that sets the slave's GTID state exactly to the master's.
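
As an illustration of the binary route, one possible flow, assuming Percona XtraBackup is available (the target directory is a placeholder):

# On the master: take a physical backup; with GTID enabled, the executed GTID set
# is recorded in the xtrabackup_binlog_info file inside the backup directory
xtrabackup --backup --target-dir=/backups/full

# Prepare the backup, then restore it into the slave's empty datadir and start mysqld
xtrabackup --prepare --target-dir=/backups/full
xtrabackup --copy-back --target-dir=/backups/full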

With GTID a lot of new problems emerge, but your replication setup will be more consistent. It is worth working with it.

banyek
  • Actually I am trying to migrate this whole MySQL server to a new machine; I installed MySQL server on the new machine and am trying to import the dump, and this is where I am getting this issue. – JAVAC May 04 '15 at 15:55
  • And the server has its own GTID set (see 'SHOW MASTER STATUS'). If it is a brand new server with no slaves currently using it, then I would remove the GTID data from the dump and import all the data. After that I'd issue a 'RESET MASTER', which resets the GTID sequence so you can start fresh. – banyek May 04 '15 at 20:51
  • Removing the GTID statement from the dump SQL file worked for me. – JAVAC May 04 '15 at 21:20

If you are like me and don't want to re-run your dump because it was a very long operation, you can just remove those lines after the fact.

find . -name '*.sql' -type f -exec perl -0 -i.bak -pe 's/SET \@\@GLOBAL\.GTID_PURGED=\x27.*?\x27;//gs' {} +

Run this in the folder with your .sql files. It will save the old version of each file as .bak.
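
If you want to confirm the statement is gone before importing, a quick check along these lines should print no file names:

grep -l 'GTID_PURGED' *.sql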

This worked for me.

Goddard
  • I'm in the same situation, thanks for sharing this hint. I have .sql.gz files; any suggestion? I'll post the solution here. – Kamal Joshi Sep 24 '20 at 08:49

I have a huge database, so, like @Goddard, I have a backup/dump spread across several files. Since I am low on disk space, I export my dumps in compressed format (i.e., .sql.gz), and I can't afford to extract those files to .sql just to apply the changes. Instead, I run the following adaptation of @Goddard's answer, which imports the .sql.gz files while removing the GTID statement on the fly.

find "$DIR" -name "*.sql.gz" | while read table
do
    echo "$table"
    zcat "$table" | perl -pe 's/SET \@\@GLOBAL\.GTID_PURGED=\x27.*?\x27;//gs' | \
                    mysql -h "$HOST" -u "$USER" -p"$PASS" -P "$PORT" "$DB"
done
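
For completeness, the variables above are assumed to be set beforehand; a minimal sketch with placeholder values:

DIR=/path/to/dumps       # folder containing the .sql.gz files
HOST=127.0.0.1           # MySQL server to import into
PORT=3306
USER=import_user
PASS='secret'
DB=target_database
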
Kamal Joshi
  • I’m not sure I 100% understand what this question is about, but it seems to me that you are extending @Goddard's answer but also contracting it.  ISTM that Goddard's answer *fixes* `*.sql` files so they can be imported, but yours only displays to the screen what the corrected version would look like.  I don’t understand how this answers the question. … … … … … … … … … … … … … … … Please do not respond in comments; [edit] your answer to make it clearer and more complete. – Scott - Слава Україні Sep 25 '20 at 20:57
  • OK, @Scott, the answer has been updated to show what I mean. – Kamal Joshi Sep 26 '20 at 16:15
  • OK, that’s somewhat clearer. But, are you talking about multiple dump files? Or is there just one, but you don’t know its name, so you’re using `find` to find it? And, if it’s multiple files, can you just concatenate them and pipe them into `mysql` all at once, or do you need to invoke `mysql` separately for each one? And does order matter? – Scott - Слава Україні Sep 26 '20 at 19:50
  • I have multiple files and it took me almost 3 days to export, so I have to loop through the file names and import them one by one. – Kamal Joshi Sep 28 '20 at 08:25
  • So, you type the `zcat … | perl … | mysql …` command multiple times (importing the files one by one)? Since you seem to have some understanding of `find`, you might want to try to figure out how to automate the entire operation. – Scott - Слава Україні Sep 28 '20 at 08:34
  • I've updated my answer with a `find` and `while` loop, and I've updated the same in my documentation: https://do-droplet-basics.netlify.app/docs/miscellaneous/ – Kamal Joshi Sep 28 '20 at 09:42
  • OK, that’s better.  (I assume that order doesn’t matter.)  I have edited your answer.  You don’t need to show the entire history in your answer, so I have deleted the original version, because, as far as I can tell, you never justified that it’s useful.  Also, you have my permission to leave out my name.  I quoted your shell variable references (e.g., `"$DIR"`, ``"$table"``, `"$HOST"`, etc.) to work with filenames with special characters.  (You should always quote your shell variable references unless you have a good reason not to, and you’re sure you know what you’re doing.)  … (Cont’d) – Scott - Слава Україні Sep 29 '20 at 04:20
  • (Cont’d) …  Even with my fixes, your solution will fail for some filenames. `find … | while read …` is a bad pattern; `find … -exec sh …` is better. See my answers to [Find and copy folders recursively but keep the structure](https://superuser.com/q/1305704/150988#1305941) and [Bash script to find folders containing a specific subfolder, and then copy to a new directory while preserving parent name](https://superuser.com/q/1467949/150988), and see also my answer to [Translating Windows script for use in Linux](https://superuser.com/q/1391047/150988#1391076). – Scott - Слава Україні Sep 29 '20 at 04:20
  • By the way, in American English, we generally don’t contract “I have” to “I’ve” when it is used as a primary verb (representing ownership or possession); for example, we wouldn’t say “I’ve an idea” or “I’ve a database”.  By contrast, “I’ve edited” and “I’ve deleted” (where “have” is used as an *auxiliary* verb) are OK.   See [Is it appropriate to use short form of “have” ('ve) when it means possession?](https://english.stackexchange.com/q/8/26083) and [Difference between “I've” and “I have”](https://ell.stackexchange.com/q/75527/357). – Scott - Слава Україні Sep 29 '20 at 04:20
  • Thanks for updating, I will start using it correctly. :) – Kamal Joshi Sep 29 '20 at 07:39