I’ve been running into several problems with restoring MySQL backups. The backups come from a different environment than the one I’m working in, so I’m forced to remove the superuser commands they contain.

The problem is that when I try to remove those commands, I constantly get UTF-8 encoding errors because the dumps are full of invalid byte sequences.

Why would MySQL encode a backup as UTF-8 if the data isn’t actually UTF-8? This feels like bad design to me.

  • folekaule@lemmy.world · 33 · 2 days ago

    Not sure if this helps you, but for anyone working with utf8 and MySQL, it’s worth reading up on the details of their Unicode support. Especially the part where it says that ‘utf8’ is an alias for ‘utf8mb3’, which may not be compatible with what other systems consider to be ‘utf8’. If you aren’t careful with this you will have problems, especially with high code points, like emoji.
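To make the utf8mb3 limitation concrete, here is a small illustrative check (this is plain Python, not MySQL itself): utf8mb3 can only store code points that encode to at most 3 bytes in UTF-8, i.e. the Basic Multilingual Plane, while emoji live above U+FFFF and need 4 bytes.

```python
def fits_utf8mb3(text: str) -> bool:
    """True if every character encodes to <= 3 UTF-8 bytes,
    which is the limit of MySQL's 'utf8' (utf8mb3) charset."""
    return all(len(ch.encode("utf-8")) <= 3 for ch in text)

print(fits_utf8mb3("héllo"))   # True: Latin text is fine
print(fits_utf8mb3("hi 😀"))   # False: U+1F600 needs 4 bytes
```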

    • limer@lemmy.dbzer0.com · 8 · 2 days ago

      Not only are there different character sets that look like Unicode, but the character set in MySQL can change based on the session, the client, the server, the database, the table, and the column. All six of them can have different encodings.

      Just make sure all of them use the same 4-byte Unicode encoding (utf8mb4). A different collation is fine when backing up, because collation only matters when comparing strings.
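A sketch of the check implied above. The first five keys are MySQL’s real `character_set_*` system variables (readable via `SHOW VARIABLES LIKE 'character_set%'`); the `column_charset` key is a hypothetical stand-in for the per-column charset you’d read from the table schema, and the values are hard-coded here for illustration only.

```python
# Hypothetical settings snapshot — in practice these come from the server.
settings = {
    "character_set_client":     "utf8mb4",  # what the client sends
    "character_set_connection": "utf8mb4",  # session/connection level
    "character_set_results":    "utf8mb4",  # what the server sends back
    "character_set_server":     "utf8mb4",  # server default
    "character_set_database":   "utf8mb4",  # per-database default
    "column_charset":           "utf8mb4",  # per-column, from the schema (illustrative key)
}

# Flag anything that isn't 4-byte Unicode.
mismatched = {k: v for k, v in settings.items() if v != "utf8mb4"}
print("all utf8mb4" if not mismatched else f"fix these: {mismatched}")
```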

    • modeler@lemmy.world · 4 · 2 days ago

      This is the right answer. I had the job of planning a schema update to fix this shitty design.

      Saying that, Unicode and character encodings are incredibly complex things that are not easily implemented. For example, two strings in UTF-8 can contain the same number of characters but be hugely different in size (up to 3–4x different!). It’s well worth reading through some articles to get a feel for the important points.
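The size difference mentioned above is easy to demonstrate: same character count, 4x the bytes.

```python
# Two strings with the same number of characters, very different UTF-8 sizes.
ascii_s = "a" * 10   # 10 characters, 1 byte each in UTF-8
emoji_s = "😀" * 10   # 10 characters, 4 bytes each in UTF-8

print(len(ascii_s), len(ascii_s.encode("utf-8")))  # 10 10
print(len(emoji_s), len(emoji_s.encode("utf-8")))  # 10 40
```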

    • Em Adespoton@lemmy.ca · 6 · 2 days ago

      That’s… extremely useful to know and highlights the issues I have with databases like MySQL.

      IMO, a DB should always have a type defined for a field, and if that type is UTF-8 but really means just the mb3 subset, you should only be able to store mb3 data in it. Not enforcing the field type is what leads to data-based function and security issues. There should also be restrictions on how data is loaded from fields depending on their type: mb3 allowing MySQL transform operations, and binary requiring a straight read/write, with some process outside the DB itself handling the resulting binary data stream.

      /rant

      • folekaule@lemmy.world · 5 · 2 days ago

        Character encoding and type coercion errors are so common. But a lot of bugs also come from programs trying to do “the right thing”. Like in OP’s case: they are just trying to import some data and maybe the data was never even intended to be interpreted as utf8, but the tool they are using to remove the commands wants to treat it that way. Sometimes the safest thing to do is to just assume data is binary until you care otherwise.
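A sketch of the “treat it as binary” approach applied to OP’s task: filter the dump as bytes, never decoding it, so invalid UTF-8 sequences pass through untouched. The statement prefixes below are examples of superuser-only lines one might strip (`SET @@GLOBAL.GTID_PURGED` does appear in real mysqldump output); adjust them for whatever your dump actually contains.

```python
# Example prefixes to strip — adjust for your dump.
SKIP_PREFIXES = (b"SET @@GLOBAL", b"SET @@SESSION.SQL_LOG_BIN")

def strip_super_lines(dump: bytes) -> bytes:
    """Drop lines starting with any skip prefix, working purely on bytes."""
    kept = [line for line in dump.split(b"\n")
            if not line.lstrip().startswith(SKIP_PREFIXES)]
    return b"\n".join(kept)

# Note the invalid-UTF-8 bytes (\xff\xfe) survive unharmed:
raw = b"SET @@GLOBAL.GTID_PURGED='abc';\nINSERT INTO t VALUES (_binary '\xff\xfe');"
cleaned = strip_super_lines(raw)
print(cleaned)
```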

  • stinky@redlemmy.com · 15 up / 1 down · 2 days ago

    MySQL doesn’t actually check whether all the data is valid UTF-8 during the dump. It simply writes the raw content into a file labeled as UTF-8.

    What versions are you using?

    • undefined@lemmy.hogru.ch (OP) · 3 up / 2 down · 2 days ago

      Yet as a developer I’m expected to deal with crazy stuff like ASCII, weird encoding standards for email, Punycode, etc. but MySQL developers couldn’t figure out how to encode characters properly while dumping the database?

      • stinky@redlemmy.com · 2 · 1 day ago

        I’d love to be a superior asshole because I use ⭐Microsoft SQL⭐ hair flip but they charge you for convenience and call it the industry standard which I find kind of abhorrent

        sorry to trauma dump

        • undefined@lemmy.hogru.ch (OP) · 1 · 1 day ago

          Oh, don’t get me started on SQL Server either. That’s the most maintenance-heavy, hands-on approach I’ve ever seen. “Do one thing and do it well” doesn’t apply; you’ve got to manage every dumb knob that shouldn’t matter to you as an end user.

  • Björn Tantau@swg-empire.de · 9 · 2 days ago

    Encoding is hard, especially when your data comes from web forms or CSV files. MySQL needed three tries to get UTF-8 right, and you need DB admins, and often programmers as well, who know this. So not everything MySQL calls UTF-8 actually is.

    And often enough it took a long while for a database to actually reach UTF-8 status. And idiots not converting the existing data leads to databases with a mixture of encodings.
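The classic “mixture of encodings” failure can be reproduced in a few lines: UTF-8 bytes get stored into a latin1 column, then read back as if they were latin1 text, and the corruption is baked in.

```python
original = "é"
utf8_bytes = original.encode("utf-8")     # b'\xc3\xa9'
mojibake = utf8_bytes.decode("latin-1")   # 'Ã©' — two characters now
print(mojibake)

# Re-encode as UTF-8 and the damage is permanent (double encoding):
print(mojibake.encode("utf-8"))           # b'\xc3\x83\xc2\xa9'
```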

    • undefined@lemmy.hogru.ch (OP) · 1 · 2 days ago

      I guess what gets me is that it’s writing to a UTF-8 file, so you’d expect that file to contain only UTF-8. Hell, I’d take binary data Base64-encoded inside UTF-8 over the hodgepodge .sql file coming out of the thing.
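What OP is wishing for, sketched: binary column data carried as Base64, which is always valid UTF-8 because its output is plain ASCII and it round-trips exactly.

```python
import base64

blob = b"\x89PNG\r\n\x1a\n\x00\xff"               # arbitrary binary data
encoded = base64.b64encode(blob).decode("ascii")  # safe in a UTF-8 file
print(encoded)

assert base64.b64decode(encoded) == blob          # lossless round trip
```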

      • Björn Tantau@swg-empire.de · 1 · 2 days ago

        There is no such thing* as a UTF-8 file. It’s just text encoded in some way. It’s only a UTF-8 file if everything is encoded as UTF-8 which it’s evidently not.

        You can even tell MySQL to export perfectly valid UTF-8 text encoded as ISO 8859-1 to import into a UTF-8 table without any troubles (maybe apart from stuff that could not be encoded in ISO 8859-1).

        *Yes, technically there could be a BOM at the beginning but almost no tool uses that and most get confused by it. And it would still not force any data written to it to be UTF-8.
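The ISO 8859-1 round trip described above can be sketched in Python: text survives as long as every character exists in that charset, and fails exactly where the parenthetical predicts.

```python
text = "café crème"
latin1_bytes = text.encode("iso-8859-1")   # "export" as latin1
back = latin1_bytes.decode("iso-8859-1")   # "import" it again
assert back == text                        # lossless — all chars fit latin1

# The "stuff that could not be encoded" case, e.g. the euro sign:
euro_ok = True
try:
    "€".encode("iso-8859-1")
except UnicodeEncodeError:
    euro_ok = False                        # U+20AC is not in ISO 8859-1
print(back, euro_ok)
```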

        • folekaule@lemmy.world · 2 · 1 day ago

          The Unicode standard allows, but recommends against, adding a BOM to UTF-8 files. UTF-8 doesn’t need one.

          I’ve only seen Microsoft tools add it, and it breaks some parsers.

          Please don’t add a BOM to UTF-8 files unless for some reason you need one.
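For reference, the UTF-8 BOM is just the bytes EF BB BF at the start of the file. Python’s `utf-8-sig` codec adds and strips it; the plain `utf-8` codec does not, which is exactly how it breaks parsers.

```python
text = "hello"
with_bom = text.encode("utf-8-sig")
print(with_bom)                     # b'\xef\xbb\xbfhello'

# A parser reading it as plain UTF-8 sees a stray U+FEFF first:
print(with_bom.decode("utf-8"))     # '\ufeffhello' — this is what trips parsers

# Reading with 'utf-8-sig' strips it again:
print(with_bom.decode("utf-8-sig")) # 'hello'
```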

        • undefined@lemmy.hogru.ch (OP) · 1 · 1 day ago

          Right, but if you’re telling the software to encode a file as UTF-8 maybe the software should actually encode it as UTF-8.

  • Toes♀@ani.social · 4 · edited · 2 days ago

    Might be best to set up an identical environment to the one it was backed up from.

    Encoding can be inconsistent across platforms.

  • foggy@lemmy.world · 3 · 2 days ago

    It could be a lot of things! For example:

    A PDF or other binary file stored in a text field might produce byte sequences in the backup that aren’t valid UTF-8.

    Similarly, audio or video files—or any kind of binary data—stored inappropriately in text fields could cause issues.

    It could also be due to corrupt data or improper encoding when the data was inserted into the database.

    Essentially, anything non-textual or incorrectly encoded can show up as invalid UTF-8 sequences in a backup.
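This is exactly how binary data in a dump surfaces as “invalid UTF-8”: decoding fails on the first byte that isn’t a legal UTF-8 sequence. A sketch, using a made-up dump line containing PNG magic bytes:

```python
blob = b"INSERT INTO files VALUES (_binary '\x89PNG\r\n\x1a\n');"

decodes_cleanly = True
try:
    blob.decode("utf-8")
except UnicodeDecodeError as e:
    decodes_cleanly = False
    print(f"invalid byte at offset {e.start}: {blob[e.start]:#x}")

# errors='replace' lets you inspect the text without crashing; each bad
# byte becomes U+FFFD, the replacement character:
print(blob.decode("utf-8", errors="replace"))
```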