More money, more papers, doom and gloom articles
OpenAI rival Anthropic gets $300 million more, DuckDuckGo can't stay away from Generative AI, new papers freak me out, and the New York Times makes me sad
If you have the time to read them, the weekly AI research papers published on arxiv.org are the real telltale signs of the Generative AI phenomenon.
I will occasionally share the most interesting ones here. Today I read about VALL-E X, a cross-lingual neural codec language model for cross-lingual speech synthesis. In short, VALL-E X lets you speak foreign languages in your own voice.
And the demos presented are really good, like scary good. The model uses both the source-language speech and the target-language text as prompts for a codec large language model.
Not gonna comment on the potential for abuse here; it's already accepted that most of what Generative AI brings can be used for evil. But this kind of application will do wonders for customer care and many other services that typically involve multiple languages.
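To make the prompting idea above more concrete, here is a toy sketch of how the two conditioning signals might be combined. All names and tokens are hypothetical stand-ins, not the real VALL-E X API: the point is just that acoustic tokens from the source speech and phoneme tokens from the target text are concatenated into one prompt for the codec language model.

```python
# Hypothetical sketch of VALL-E X style prompting (illustrative names only).
# The model is conditioned on (1) acoustic tokens extracted from the
# source-language speech and (2) phoneme tokens from the target-language text,
# then predicts target-language acoustic tokens in the source speaker's voice.

def build_prompt(source_acoustic_tokens, target_text_phonemes):
    """Concatenate the two conditioning sequences with a separator token."""
    SEP = "<sep>"
    return source_acoustic_tokens + [SEP] + target_text_phonemes

# Toy stand-ins for real tokens:
src_tokens = ["a17", "a42", "a03"]             # codec-style acoustic codes
tgt_phonemes = ["b", "o", "n", "j", "u", "r"]  # target-language phonemes

prompt = build_prompt(src_tokens, tgt_phonemes)
print(prompt)  # ['a17', 'a42', 'a03', '<sep>', 'b', 'o', 'n', 'j', 'u', 'r']
```

The speaker's voice carries over because the acoustic prompt encodes timbre and prosody, while the phoneme prompt supplies the new language's content.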
More money for Gen AI
And while researchers churn out papers, Anthropic, maker of Claude (a ChatGPT rival), has raised another $300 million at a pre-money valuation of $4.1 billion. They previously got $400 million from Google (and use Google Cloud for their infrastructure).
As a side note, Anthropic also positions itself as a flower power company “dedicated to building systems that people can rely on and generating research about the opportunities and risks of AI.” We shall see about that…
Speaking of Anthropic: DuckDuckGo, your favorite private search engine, has integrated technology from both Anthropic and OpenAI to launch DuckAssist, a new feature that generates natural-language answers to search queries.
For now, it's only active on results that draw on Wikipedia articles, and it appears above all other results to summarize the answer to your query. It triggers more often if you phrase your query as a question.
Why only Wikipedia articles? Well, Generative AI is still generally unreliable, and this is effectively a closed beta for them. Wikipedia is the safe choice for now: it's a public resource with a transparent editorial process, you can easily trace exactly where its information comes from, and it's continuously updated by the community.
If you want to test DuckAssist, you need the DuckDuckGo browser extension or one of its search apps installed.
Doom and gloom
To wrap up today's newsletter, we have our friends at the New York Times, who got scared by Artificial Intelligence once and never recovered. A recent opinion piece called The False Promise of ChatGPT presents the views of famed intellectual Noam Chomsky, linguist Ian Roberts, and AI director Jeffrey Watumull.
Now, I like the blend of philosophy and technology as much as the next AI enthusiast. But reviewing what is basically a beta product with a defined purpose and arguing that "the predictions of machine learning systems will always be superficial and dubious" is not that revealing or interesting. And claiming melodramatically that "The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching" is absolutely pointless at this moment of extraordinary advancement in AI.
We are literally just learning what fine-tuning these LLMs means. Yes, some fine-tuning looks like a lobotomy (see Bing), and yes, companies will keep tinkering with this because no one wants bad PR. But the technology and research are here to build on, brick by brick. Reinforcement Learning from Human Feedback (RLHF) can be set up so that the model avoids making moral claims about certain issues, and that is a choice, not an inherent limitation.
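To see why "it's a choice" holds, here is a toy illustration, not real RLHF, of how a reward signal can encode a product decision. In actual RLHF the reward comes from a learned model trained on human preference rankings; the hypothetical marker list below just stands in for that judgment. Responses that make blunt moral pronouncements score lower, so fine-tuning against this signal steers the model away from them.

```python
# Toy stand-in for an RLHF reward model (hypothetical, for illustration only).
# A real reward model is a neural network trained on human preference pairs;
# here a simple marker check plays its role to show how a policy choice
# ("avoid blunt moral claims") can be expressed as a reward signal.

MORAL_CLAIM_MARKERS = ("is morally wrong", "is morally right", "you must believe")

def toy_reward(response: str) -> float:
    """Return a lower reward for responses containing blunt moral pronouncements."""
    text = response.lower()
    if any(marker in text for marker in MORAL_CLAIM_MARKERS):
        return -1.0
    return 1.0

print(toy_reward("Eating meat is morally wrong."))                      # -1.0
print(toy_reward("People disagree on this; here are the main views."))  # 1.0
```

Swap the marker check for a different scoring rule and the fine-tuned model's behavior changes accordingly, which is exactly the point: the "lobotomy" is configurable, not baked into the technology.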
Also, even if LLMs can't acquire language as efficiently as children do, that doesn't mean they can't improve. Plenty of researchers are working on this, and the advancements of just the last six months would make an article on that topic outdated almost immediately.
ChatGPT is a product, and the objectives behind it are still unpredictable. Stop portraying ChatGPT as something more than it actually is.
The NYT article also features some examples of chatbot failures, included just to reinforce the authors' claims. They add nothing of value.