Tim Harford mentioned this in his 2016 book “Messy”.
They just wanna call it AI and make it sound like some mysterious intelligence we can’t comprehend.
What used to take weeks of highly skilled work can now be accomplished in hours.
(…) delivers stunning high-performance devices that run counter to the usual rules of thumb and human intuition (…) Eventually AI-created circuits will power better AI. The singularity may happen soon. This is unpredictable.
Lmao calm down AI can’t even reliably differentiate cats from dogs
This isn’t exactly new. I heard a few years ago about a situation where the AI had these wires on the chip that shouldn’t do anything, as they didn’t go anywhere, but if they were removed the chip stopped working correctly.
Yeah, I’ve stumbled upon that one a while back too, probably. Was it also the one where the initial designs would refuse to work outside room temperature until the AI was asked to take temperature into account?
I don’t know about AI involvement but this story in general is very very old.
I remember that as well.
Edit: moved comment to correct reply.
I thought of this as well. In fact, as a bit of fun I added a switch to a rack at our lab in a similar way, with the same labels. This one, though, does nothing, but people did push the “turbo” button on old PC boxes even though those buttons often weren’t connected.
My turbo button was connected to an LED but that was it
That was a different technique, using simulated evolution in an FPGA.
An algorithm would create a series of random circuit designs, program the FPGA with them, then evaluate how well each one accomplished a task. It would then take the best design, create a series of random variations on it, and select the best one. Rinse and repeat until the circuit is really good at performing the task.
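The loop described above can be sketched in a few lines. This is a toy (1+λ)-style evolutionary loop, not Thompson’s actual setup: the “configuration bitstream” is just a list of bits, and a simple score function stands in for programming a real FPGA and measuring how well it performs the task. All names and parameters here are illustrative.

```python
import random

def evolve(fitness, genome_len=64, variants=20, generations=200, seed=0):
    """Keep the best design, create random variations, select, repeat."""
    rng = random.Random(seed)
    # Start from a random "configuration bitstream".
    best = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Create a series of random variations on the current best design
        # (each bit has a small chance of flipping), keeping the parent too.
        candidates = [best] + [
            [bit ^ (rng.random() < 0.05) for bit in best]
            for _ in range(variants)
        ]
        # Evaluate how well each one accomplishes the task; keep the best.
        best = max(candidates, key=fitness)
    return best

# Toy stand-in for "program the FPGA and measure it": count the 1-bits.
result = evolve(sum)
```

Because the parent is always kept among the candidates, the best fitness never decreases; the real-world twist in the FPGA experiments was that the fitness measurement happened on physical silicon, so evolution could exploit analog quirks no simulator would model.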
I think this is what I am thinking of. Kind of a predecessor of modern machine learning.
It is a form of machine learning
I remember this too, it was years and years ago (I almost want to say 2010-2015). Can’t find anything searching for it
You helped me narrow it down. I expect Adrian Thompson’s research from the 90s, referenced in this Wikipedia article, is what you’re thinking of.
Perhaps you’re an AI who only hallucinated a circuit design.
Sounds like RF reflection used like a data capacitor or something.
The particular example was getting clock-like behavior without a clock. It had an incomplete circuit that used RF reflection or something very similar to simulate a clock. Of course, removing this dead-end circuit broke the design.
“We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better.”
Great, so we will eventually have black box chips running black box algorithms for corporations where every aspect of the tech is proprietary and hidden from view with zero significant oversight by actual people…
The true cyber-dystopia.
This has been going on in chess for a while as well. Computers can detect patterns that humans cannot because they have a better memory and knowledge base.
Well, that’s kind of like the human brain isn’t it? You don’t really know how it does its thing but it does it.
Nope, we actually have entire fields of study that focus on the brain and cognition, with thousands of experts and decades of research and experimentation, so we understand a ton about how our brains work and why we behave the way we do.
Plus, your brain is not created and owned entirely by trillion dollar megacorps with the primary incentive to use it to increase profitability.
We also know how “AI” works and how it creates its outputs, in much the same way we know the brain.
Don’t try to equate having fields of study and experts with definitive knowledge of something; that’s fallacious.
And yet, this AI expert stated that we don’t know why the AI designed the chip in specific ways. There’s a difference between understanding the rough mechanism for something, and understanding why something happened.
Imagine hiring an engineer to design something, they hand you a finished design; they cannot explain what it is, how they actually designed it, how it works, or why they made the specific choices they did.
I never made the false equivalency you claimed I did, and you also never addressed my second criticism, which is telling.
*yet
It is possible for AI to hallucinate elements that don’t work, at least for now. This requires some level of human oversight.
So, the same as LLMs and they got lucky.
See? I want this kind of AI. Not a word dreaming algorithm that spews misinformation
I want AI that takes a foreign language movie, and augments their face and mouth so it looks like they are speaking my language, and also changes their voice (not a voice over) to be in my language.
Idk, kinda the same, but instead of misinformation we get ICs that release a cloud of smoke in the shape of a cat when presented with a specific pattern of inputs (or smth equally batshit crazy)
You want AI that makes chips that run AI faster and better?
You’ve fallen into its trap!
Read the article, it’s still ‘dreaming’ and spewing garbage, it’s just that in some iterations it’s gotten lucky. “Human oversight needed” they say. The AI has no idea what it’s doing.
I wonder how well it could work to use AI in developing an algorithm to generate chip designs. My annoyance with all of this stuff is how much people say, “Look! AI invented something new! It only took a few hours and 100x the resources!”
AI is mainly the capitalist dream of a drinking bird toy keeping a nuclear reactor online and paying a layman slave wages to make sure the bird does its job (obligatory “Simpsons did it”).
Yeah I got that. But I still prefer “AI doing science under a scientist’s supervision” over “average Joe can now make a deepfake and publish it for millions to see and believe”
This is what almost all AI is. GPT models are a tiny subsect.
Subset
You are correct but I like subsect better.
I like the subtlety of it tbh.