AI Commerce

26 Nov 2023 09:08:39

By Leif Weatherby 
 
On Friday, OpenAI, the Microsoft-funded operator of ChatGPT, fired its CEO, Sam Altman. Then, after five days of popcorn-emoji chaos, it hired him back. The sudden move, which its multibillion-dollar investor Microsoft learned about only moments before it was made public, seems to have come from a fight between Altman and engineer Ilya Sutskever, who is in charge of "alignment" at the company. Sutskever's faction, including board member Helen Toner, whose feud with Altman may have precipitated these events, is out. Larry Summers, the former Treasury secretary and Harvard president who doubted that women are good at science, is in.

Altman's return means that, in a fight about profit versus safety, profit won. Or maybe safety did? OpenAI is a weird company, and its renewed charter re-emphasises its original goal: to save humanity from the very technology it is inventing. Both sides in this fight think artificial general intelligence ("AGI," or human-level intelligence) is close. Altman said, the day before he was fired, that "four times" (one within the last few weeks) he had seen OpenAI scientists push "the veil of ignorance back and the frontier of discovery forward." Sutskever worries about AI agents forming megacorporations with unprecedented power, leads employees in the chant "Feel the AGI! Feel the AGI!" and reportedly burned an effigy of an "unaligned" AGI to "symbolise OpenAI's commitment to its founding principles."

Toner hails from Georgetown University by way of the University of Oxford's Future of Humanity Institute, a leading research institute for the perpetuation of pseudoscience-fiction ideology run by the philosopher Nick Bostrom. The question, in this atmosphere, is not whether machines are intelligent, but whether to accelerate the development and distribution of this potential AGI (Altman's position) or to pump the brakes, hard (Sutskever's apparent desire). This debate has now broken out into the open and highlighted the conflict between so-called artificial intelligence (AI) doomers and accelerationists. The doomer question is what the probability of extinction is: your assessment of "p(doom)."
 
Economist Tyler Cowen has pointed out that doomers don't back up their belief in the AI takeover with actual bets on this outcome, but if tens of billions of dollars hang on this type of fight, it's hard to see it as unimportant.

The goal that emerges from this cocktail of science and religious belief in AGI is to "align" machine intelligence with human values, so that, if it gains sentience, it cannot harm us. "Alignment," author Brian Christian tells us, was borrowed by computer science from 1980s "management science" discourse, where providing incentives to create a "value-aligned" corporation was all the rage. Economists have pointed out that "direct alignment" with a single institution is radically different from "social alignment," which is what OpenAI is focused on. Sutskever's group there calls its project "superalignment," pumping the rhetorical stakes even higher. But this is really just vapour, and it betrays a shocking misunderstanding of the very technology these business leaders and engineers are hawking.

Karl Marx said that capitalism seemed straightforward but actually harboured "metaphysical subtleties and theological niceties." There's nothing subtle or nice about what's happening in AI enterprise, though, and we're not doing a great job of countering it with critique. The events at OpenAI this week are a great example of what I think of as "metaphysics in the C-suite": an unhinged, reality-free debate driving decisions with sky-high market caps and real, dangerous potential consequences.

The alignment concept is a house of cards that immediately falls apart when its assumptions are revealed. This is because every attempt to frame alignment relies on a background conception of language or knowledge that is "value neutral," but never makes this fully explicit. One suspects this is because value neutrality, and thus "alignment" itself, has no real definition. Whether you think the good thing is unbiased machines or fending off a machine that learns to kill us, you're basically missing the fact that AI is already a reflection of actual human values.
 
The fact that this reflection is neither good nor neutral needs to be taken far more seriously.

There is a whole industry devoted to AI safety, and much of it is not about metaphysics. It's not that nothing is wrong; we all read daily about the many terrifying ills of our automated systems. Curbing actual harm is important, don't get me wrong. It's just not clear that "alignment" can help, because it's not clear that it's a concept at all.

The alignment debate didn't begin with generative AI. When Google figured out how to make computers produce meaningful language, one of the first things the machine spat out was the idea that women should be homemakers. The scientists in the room at the time, Christian reports, said, "Hey, there's something wrong here." They were rightly horrified by this harmful idea, but they weren't sure what to do. How could you get a computer to speak to you (something we now take for granted with the rise of ChatGPT) but also conform to values like equality?

The goal of alignment is like Isaac Asimov's famous law of robotics that prevents machines from harming humans. Bias, falsehood, deceit: these are the real harms that machines stand to do to humans today, so aligning AI seems like a pressing problem. But the truth is that AI is very much aligned with human values; we just can't stand to admit it.

(IPA/Courtesy: Jacobin)