• 0 Posts
  • 100 Comments
Joined 1 year ago
Cake day: September 27th, 2023

  • Honestly, I think that sort of training is largely already over. The datasets already exist (they have for over a decade now), and training is largely self-supervised at this point. Any training on new images is going to be done by looking at captions under news images, or by crawling videos with voiceovers. I don’t think this is a going concern anymore.

    And, incidentally, that kind of dataset just isn’t very valuable to AI companies. Most of the use they’ll get out of it is in creating accessible image descriptions for visually impaired people anyway; the images don’t offer much additional value for generative diffusion models beyond the image itself, since the aforementioned image-description models are already so good.

    In short, I really strongly believe that this isn’t a reason not to alt-text your images.

  • You’re ignoring everything else I said because you disagree with one semantic point in a partial response, so here it is again.

    Most of the time, a company can’t afford to simply not release a product they’ve worked on. They talked in the announcement stream about why it didn’t turn out the way they wanted (the laws of physics), but once they had already invested in the R&D to produce the box, they couldn’t just decide “never mind.” A company that does that too often goes out of business.

    EDIT: also, you said “bit by bit” in your original message. You don’t do things bit by bit unless you’re trying to be sneaky.