They say they did this by “finetuning GPT 4o.” How is that even possible? Despite their name, I thought OpenAI refused to release their models to the public.
They kind of have to now, though. They have been forced into it by DeepSeek: if they didn't release their models, no one would use them, not when an open-source equivalent is available.
I feel like the vast majority of people just want to log onto ChatGPT and ask their questions, not host an open-source LLM themselves. I suppose other organizations could host DeepSeek, though.
Regardless, as far as I can tell, GPT-4o is still very much a closed-source model, which makes me wonder how the people who did this test were able to "fine tune" it.
https://openai.com/index/gpt-4o-fine-tuning/
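That link is the answer: OpenAI offers fine-tuning of GPT-4o as a hosted service. You never get the weights; you upload training data, they run the training on their infrastructure, and the tuned model is only callable back through their API. A minimal sketch of what that workflow looks like, assuming the openai Python SDK (v1.x) and an illustrative file name and model snapshot:

```python
# Sketch of hosted GPT-4o fine-tuning via OpenAI's API (no weights ever leave OpenAI).
# Assumes the openai Python SDK v1.x; "train.jsonl" and the snapshot name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a GPT-4o snapshot; training runs on OpenAI's side.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)

# 3. Once the job completes, the tuned model is still closed: you can only call it
#    through the same API, e.g.
# response = client.chat.completions.create(
#     model=job.fine_tuned_model,
#     messages=[{"role": "user", "content": "Hello"}],
# )
```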