OpenAI’s New Search Engine Rumors and More
It is Tuesday again, so as promised here are some titbits about AI.
- This week could be a game-changer in the AI world
Guess who said this – “I find the current model boring. The question shouldn’t be about building a ‘better’ Google Search. It’s about fundamentally improving information discovery, utilization, and synthesis”
It was OpenAI CEO Sam Altman. The rumour in the AI world is that he, backed by Microsoft, may be on the verge of announcing a new search engine, most likely on May 9 – that's just two days from now!
Of course, before the Google I/O event 😊
If this happens, ARC, Perplexity and others could face a huge problem, and Google will be in battle mode like never before.
Buckle up and get ready to enjoy the next few weeks of action!
- No introduction is needed for this … just have a look: https://twitter.com/MFA_Ukraine/status/1785558101908742526
And imagine what is going to come in the near future!!
A related question – Has anyone seen this 2024 movie: “Teri Baaton Mein Aisa Uljha Jiya”?
- Okay, next question: Did you know that you can ask ChatGPT to convert many common file types to the format you need? Use this simple prompt: “Convert the attached file from [current file format] to [desired file format]”
[Attach file]
For example, you can convert a PDF into a Word file or an MP4 video into a GIF.
It’s easy and saves time.
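Fun fact: behind the scenes, ChatGPT typically handles these conversions by writing and running Python in its Code Interpreter sandbox. Here’s a minimal sketch of what the MP4-to-GIF case might look like if you ran it locally (the moviepy library and the file names are my assumptions, not what ChatGPT actually uses):

```python
# A rough local equivalent of the MP4-to-GIF conversion
# (moviepy and these file names are assumptions for illustration).
from moviepy.editor import VideoFileClip

clip = VideoFileClip("input.mp4")                   # load the source video
clip.subclip(0, 5).write_gif("output.gif", fps=10)  # first 5 seconds as a GIF
clip.close()
```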
- Here’s an interesting prompt to get some marketing ideas:
“Create an advertising campaign about [company, product, or service] targeting [target audience]. Include key messages and slogans and choose the best media channels for promotions.”
While experimenting with this prompt, ask for the campaign in a specific format that you like; you could also upload an older campaign and ask the model to replicate its format with the new company’s details.
The more you experiment, the better results you will get.
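If you prefer to run this programmatically rather than in the chat window, here’s a minimal sketch using the OpenAI Python SDK (the model name and the filled-in product and audience are placeholder assumptions; swap in your own):

```python
# A minimal sketch: sending the campaign prompt via the OpenAI Python SDK.
# The model name and placeholder values below are assumptions; use your own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Create an advertising campaign about {product} targeting {audience}. "
    "Include key messages and slogans and choose the best media channels "
    "for promotions."
).format(product="a reusable water bottle", audience="college students")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```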
- Lastly, colleagues and clients have asked me: what are the “tokens” and “context window” that get compared across the various LLM models?
To state it simply, “tokens” are the chunks of text (whole words or pieces of words) that AI models read and write, and the “context window” is the maximum number of tokens a model can process at the *same time*. In general, the larger the context window, the more text a model can take in at once, along with its syntax and other nuances of language. And theoretically, the more it can take in, the better answers it can provide.
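If you want to see tokens concretely, here’s a quick sketch using OpenAI’s tiktoken library (the sample sentence is just for illustration):

```python
# Count the tokens a GPT-4-class model would see for a given sentence.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
tokens = enc.encode("The more you experiment, the better results you will get.")
print(len(tokens))         # number of tokens, not words
print(enc.decode(tokens))  # decoding round-trips back to the original text
```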
To better understand this, let’s compare how many ‘Harry Potter’ books can fit within each platform’s context window.
According to Eduardo Viteri, Google’s Gemini 1.5 Pro can fit nearly 10 copies of “Harry Potter and the Sorcerer’s Stone” — or about 1 million tokens — within its context window. That’s far more than competitors’ models:
Claude 2.1 comes in second place with about 200,000 tokens, or roughly 1.95 Harry Potter books.
GPT-4 Turbo comes third with 1.25 books.
Grok can take only 0.24 books.
GPT-3.5 Turbo manages only 0.16 books.
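As a sanity check, those ratios all imply roughly 100,000 tokens per copy of the first book. Here’s a quick back-of-the-envelope in Python (the per-book token count and the context window sizes are rough assumptions as of writing; Grok is omitted because I’m less sure of its window):

```python
# Books-per-context-window, assuming ~100,000 tokens for
# "Harry Potter and the Sorcerer's Stone" (a rough estimate).
TOKENS_PER_BOOK = 100_000

context_windows = {          # commonly cited sizes, in tokens
    "Gemini 1.5 Pro": 1_000_000,
    "Claude 2.1": 200_000,
    "GPT-4 Turbo": 128_000,
    "GPT-3.5 Turbo (16k)": 16_000,
}

for model, window in context_windows.items():
    print(f"{model}: {window / TOKENS_PER_BOOK:.2f} books")
```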
Of course, this will keep evolving, and by next week these numbers might no longer be relevant!!
That’s it for now. If you found this post interesting, please let me know!
Thanks!