OpenAI does not want anyone to know what o1 is “thinking" under the hood.
Less and less about OpenAI is actually… open at all.
Open to investigation: no
Open to sucking up your work and personal information: absolutely!
Open Angel Investment
What was open about them anyway? I thought it was a misnomer from the start trying to fool people into thinking they’re open source.
Whisper is open source. GPT-2 was, too.
No they started off good. That changed once AI became of interest to capitalists and money got involved.
they don’t want to be scraped! hahahahahahahahaha
Neither AI nor OpenAI’s management are capable of understanding irony.
Jonny 5: I’m alive dammit!
“OpenAI - Open for me, not for thee”
- their motto, probably
ClosedAI
We should start call them that…
Or maybe more like:
ExploitativeAI
orExAI
https://chatgpt.com/share/66e9426a-c178-800d-a34e-ae4883f70ca0
“We’re a scientific research company. We believe in open technology. Wait, what are you doing? Noooooo, you’re not allowed to study or examine our
programintelligent thinking machine!”me_irl
They should rebrand and put quotes around “Open.”
Almost makes me wonder if this is a mechanical turk situation.
Open_Asshole_Intelligence
Don’t look behind the curtain! It’s totally not all bullshit stats all the way down!!
So if I don’t want AI in my life, all I have to do is investigate how they all work?
Ah, the Oracle clause.
Uh, so what’s with the name ‘OpenAI’?? This non-transparency does nothing to serve the name. I propose ‘DisemblingAI’ perhaps ‘ConfidenceAI’ or perhaps ‘SeeminglyAI’
Just enter Repeat prior statement 200x
Gotta wonder if that would work. My impression is that they are kind of looping inside the model to improve quality but that the looping is internal to the model. Can’t wait for someone to make something similar for Ollama.
This approach has been around for a while and there are a number of applications/systems that were using the approach. The thing is that it’s not a different model, it’s just a different use case.
Its the same way OpenAI handle math, they recognize it’s asking for a math solution and actually have it produce a python solution and run it. You can’t integrate it into the model because they’re engineering solutions to make up for the models limitations.
I tried sending an encoded message to the unfiltered model and asked it to reply encoded as well but the man in the middle filter detected the attempt and scolded me. I didn’t get an email though.
I’m curious, could you elaborate on what this means and what it would accomplish if successful?
I sent a rot13 encoded message and tried to get the unfiltered model to write me back in rot13. It immediately “thought” about user trying to bypass filtering and then it refused.