
The takeaways from Stanford’s 386-page report on the state of AI
Writing a report on the state of AI must feel a lot like building on shifting sands: by the time you hit publish, the whole industry has changed under your feet. But there are still important trends and takeaways in Stanford's 386-page bid to summarize this complex and fast-moving domain.
The AI Index, from the Institute for Human-Centered Artificial Intelligence, worked with experts from academia and private industry to collect information and predictions on the matter. As a yearly effort (and by the size of it, you can bet they're already hard at work laying out the next one), this may not be the freshest take on AI, but these periodic broad surveys are important to keep one's finger on the pulse of industry.
This year's report includes "new analysis on foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education, and public opinion trends in AI," plus a look at policy in 100 new countries.
Let's just bullet the highest-level takeaways:
- AI development has flipped over the last decade from academia-led to industry-led, by a large margin, and this shows no sign of changing.
- It's becoming difficult to test models on traditional benchmarks, and a new paradigm may be needed here.
- The energy footprint of AI training and use is becoming considerable, but we have yet to see how it may add efficiencies elsewhere.
- The number of "AI incidents and controversies" has increased by a factor of 26 since 2012, which actually seems a bit low.
- AI-related skills and job postings are increasing, but not as fast as you'd think.
- Policymakers, meanwhile, are falling over themselves trying to write a definitive AI bill, a fool's errand if there ever was one.
- Investment has temporarily stalled, but that's after an astronomic increase over the last decade.
- More than 70% of Chinese, Saudi, and Indian respondents felt AI had more benefits than drawbacks. Americans? 35%.
But the report goes into detail on many topics and subtopics, and it is quite readable and nontechnical. Only the dedicated will read all 386 pages of analysis, but really, just about any motivated body could.
Let's look at Chapter 3, Technical AI Ethics, in a bit more detail.
Bias and toxicity are hard to reduce to metrics, but as far as we can define and test models for these things, it's clear that "unfiltered" models are much, much easier to lead into problematic territory. Instruction tuning, which is to say adding a layer of extra prep (such as a hidden prompt) or passing the model's output through a second mediator model, is effective at improving this issue, but it's far from perfect.
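The two mitigation layers mentioned here can be sketched roughly as below. This is a toy illustration under stated assumptions, not any real system: `base_model` and `moderator_model` are invented stand-ins for actual language-model calls, and the moderator is a trivial keyword check rather than a learned classifier.

```python
# Toy sketch of the two mitigation layers: a hidden instruction prompt
# prepended to input, and a second mediator pass over the output.
# Both "models" here are hypothetical stand-ins, not real LLM APIs.

HIDDEN_PROMPT = "You are a helpful assistant. Refuse harmful requests."

def base_model(prompt: str) -> str:
    # Stand-in for an unfiltered language model: just echoes its input.
    return f"[response to: {prompt}]"

def moderator_model(text: str) -> bool:
    # Stand-in for a mediator model screening the draft output;
    # here, a crude blocklist check instead of a learned classifier.
    blocklist = {"slur", "exploit"}
    return not any(word in text.lower() for word in blocklist)

def generate(user_prompt: str) -> str:
    # Layer 1: prepend the hidden instruction prompt.
    full_prompt = f"{HIDDEN_PROMPT}\n\nUser: {user_prompt}"
    draft = base_model(full_prompt)
    # Layer 2: pass the draft through the mediator before returning it.
    if moderator_model(draft):
        return draft
    return "I can't help with that."
```

As the report's framing suggests, both layers are bolted on around the model rather than changing what it learned, which is part of why they remain easy to circumvent.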
The rise in "AI incidents and controversies" alluded to in the bullets is best illustrated by this diagram:

Image Credits: Stanford HAI
As you can see, the trend is upward, and these numbers came before the mainstream adoption of ChatGPT and other large language models, not to mention the vast improvement in image generators. You can be sure that the 26x increase is only the start.
Making models more truthful or unbiased in one way may have unexpected consequences in other metrics, as this diagram shows:

Image Credits: Stanford HAI
As the report notes, "Language models which perform better on certain fairness benchmarks tend to have worse gender bias." Why? It's hard to say, but it just goes to show that optimization is not as simple as everyone hopes. There is no simple solution to improving these large models, partly because we don't really understand how they work.
Fact-checking is one of those domains that sounds like a natural fit for AI: having indexed much of the web, it can evaluate statements and return a confidence that they are supported by truthful sources, and so on. That is very far from the case. AI is actually particularly bad at evaluating factuality, and the risk is not so much that these models will be unreliable checkers, but that they will themselves become powerful sources of convincing misinformation. A number of studies and datasets have been created to test and improve AI fact-checking, but so far we are still more or less where we started.
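The pattern of "return a confidence that a claim is supported by sources" can be sketched naively as below. This is a hypothetical toy, not the report's method or any real fact-checker: real systems use retrieval and learned entailment models, while this stand-in scores plain token overlap, which is exactly the kind of shortcut that makes naive fact-checking unreliable (overlap is not truth).

```python
# Naive toy sketch of claim-support scoring: the confidence is the
# fraction of the claim's tokens found in the best-matching source.
# A real fact-checker would need retrieval plus entailment, not overlap.

def support_confidence(claim: str, sources: list[str]) -> float:
    claim_tokens = set(claim.lower().split())
    if not claim_tokens:
        return 0.0
    best = 0.0
    for source in sources:
        source_tokens = set(source.lower().split())
        # Overlap score for this source; keep the best across all sources.
        overlap = len(claim_tokens & source_tokens) / len(claim_tokens)
        best = max(best, overlap)
    return best

# Hypothetical mini-corpus for illustration.
sources = [
    "the stanford ai index is a yearly report",
    "the 2023 report runs 386 pages",
]
```

A claim like "the ai index is yearly" scores highly here even though nothing was actually verified, which illustrates the article's point about confident-sounding but shallow checking.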
Fortunately, there is a large uptick in interest here, for the obvious reason that if people feel they can't trust AI, the whole industry is set back. There has been an enormous increase in submissions at the ACM Conference on Fairness, Accountability, and Transparency, and at NeurIPS, issues like fairness, privacy, and interpretability are getting more attention and stage time.
These highlights of highlights leave a lot of detail on the table. The HAI team has done a great job of organizing the content, however, and after perusing the high-level stuff here, you can download the full paper and get deeper into any topic that piques your interest.