Artificial general intelligence

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well as or better than humans on a wide range of cognitive tasks,[1] unlike narrow artificial intelligence, which is designed for specific tasks.[2] It is one of various definitions of strong AI.[3]

Creating AGI is a primary goal of artificial intelligence research and of companies such as OpenAI,[4] DeepMind and Anthropic. A 2020 survey identified 72 active AGI research and development projects in 37 countries.[5]

The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some believe it could arrive within years or decades; others maintain it might take a century or longer; and a minority believe it may never be achieved.[6] There is also debate over the exact definition of AGI and over whether modern large language models (LLMs) such as GPT-4 are early, incomplete forms of AGI.[7] AGI is a common topic in science fiction and futures studies.

There is contention over whether AGI could pose a threat to humanity;[8] for example, OpenAI states that it treats AGI as an existential risk, while others consider its development too remote to present any such risk.[9][6][10]

Terminology

Artificial general intelligence is also known as strong AI,[11][12] full AI,[13] human-level AI[6] or general intelligent action.[14] However, some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness.[a] In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities.[15][12] Some academic sources use "weak AI" to refer to any programs that neither experience consciousness nor have a mind in the same sense as humans.[a]

Related concepts include artificial superintelligence and transformative AI. Artificial superintelligence (ASI) is a hypothetical type of AGI that is far more generally intelligent than humans,[16] while transformative AI refers to AI that has a large impact on society, comparable, for example, to the agricultural or industrial revolution.[17]

Notes

  1. ^ a b See below for the origin of the term "strong AI", and see the academic definitions of "strong AI" and "weak AI" in the article Chinese room.

References

  1. ^ Heaven, Will Douglas (16 November 2023). "Google DeepMind wants to define what counts as artificial general intelligence". MIT Technology Review. Retrieved 1 March 2024.
  2. ^ Krishna, Sri (9 February 2023). "What is artificial narrow intelligence (ANI)?". VentureBeat. Retrieved 1 March 2024.
  3. ^ Krishna, Sri (9 February 2023). "What is artificial narrow intelligence (ANI)?". VentureBeat. Retrieved 1 March 2024.
  4. ^ "OpenAI Charter". openai.com. Retrieved 6 April 2023.
  5. ^ Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (PDF), Global Catastrophic Risk Institute Working Paper 20, archived (PDF) from the original on 14 November 2021, retrieved 13 January 2022.
  6. ^ a b c "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. Retrieved 6 April 2023.
  7. ^ "Microsoft Researchers Claim GPT-4 Is Showing "Sparks" of AGI". Futurism. Retrieved 13 December 2023.
  8. ^ Morozov, Evgeny (30 June 2023). "The True Threat of Artificial Intelligence". The New York Times. Archived from the original on 30 June 2023. Retrieved 30 June 2023.
  9. ^ "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks". ABC News. 23 March 2023. Retrieved 6 April 2023.
  10. ^ "Artificial general intelligence: Are we close, and does it even make sense to try?". MIT Technology Review. Retrieved 6 April 2023.
  11. ^ Kurzweil 2005, p. 260.
  12. ^ a b Kurzweil, Ray (5 August 2005), "Long Live AI", Forbes, archived from the original on 14 August 2005. Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."
  13. ^ "The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013". Archived from the original on 26 February 2014. Retrieved 22 February 2014.
  14. ^ Newell & Simon 1976. This is the term they use for "human-level" intelligence in the physical symbol system hypothesis.
  15. ^ "The Open University on Strong and Weak AI". Archived from the original on 25 September 2009. Retrieved 8 October 2007.
  16. ^ "What is artificial superintelligence (ASI)? | Definition from TechTarget". Enterprise AI. Retrieved 8 October 2023.
  17. ^ "Artificial intelligence is transforming our world – it is on all of us to make sure that it goes well". Our World in Data. Retrieved 8 October 2023.

Literature

  • UNESCO Science Report: the Race Against Time for Smarter Development (PDF). Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived (PDF) from the original on 18 June 2022. Retrieved 22 September 2021.
  • Aleksander, Igor (1996), Impossible Minds, World Scientific Publishing Company, ISBN 978-1-86094-036-1.
  • Azevedo FA, Carvalho LR, Grinberg LT, Farfel J, et al. (April 2009), "Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain", The Journal of Comparative Neurology, 513 (5): 532–541, PMID 19226510, S2CID 5200449, doi:10.1002/cne.21974, archived from the original on 18 February 2021, retrieved 4 September 2013.
  • Berglas, Anthony (2008), Artificial Intelligence will Kill our Grandchildren, archived from the original on 23 July 2014, retrieved 31 August 2012.
  • Chalmers, David (1996), The Conscious Mind, Oxford University Press.
  • Clocksin, William (August 2003), "Artificial intelligence and the future", Philosophical Transactions of the Royal Society A, 361 (1809): 1721–1748, Bibcode:2003RSPTA.361.1721C, PMID 12952683, S2CID 31032007, doi:10.1098/rsta.2003.1232.
  • Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
  • Darrach, Brad (20 November 1970), "Meet Shakey, the First Electronic Person", Life Magazine, pp. 58–68.
  • Drachman, D. (2005), "Do we have brain to spare?", Neurology, 64 (12): 2004–2005, PMID 15985565, S2CID 38482114, doi:10.1212/01.WNL.0000166914.38327.BB.
  • Feigenbaum, Edward A.; McCorduck, Pamela (1983), The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, Michael Joseph, ISBN 978-0-7181-2401-4.
  • Gelernter, David (2010), Dream-logic, the Internet and Artificial Thought, archived from the original on 26 July 2010, retrieved 25 July 2010.
  • Goertzel, Ben; Pennachin, Cassio, eds. (2006), Artificial General Intelligence (PDF), Springer, ISBN 978-3-540-23733-4, archived from the original (PDF) on 20 March 2013.
  • Goertzel, Ben (December 2007), "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil", Artificial Intelligence, 171 (18, Special Review Issue): 1161–1173, doi:10.1016/j.artint.2007.10.011, archived from the original on 7 January 2016, retrieved 1 April 2009.
  • Gubrud, Mark (November 1997), "Nanotechnology and International Security", Fifth Foresight Conference on Molecular Nanotechnology, archived from the original on 29 May 2011, retrieved 7 May 2011.
  • Halal, William E. "TechCast Article Series: The Automation of Thought" (PDF). Archived from the original (PDF) on 6 June 2013.
  • Holte, R. C.; Choueiry, B. Y. (2003), "Abstraction and reformulation in artificial intelligence", Philosophical Transactions of the Royal Society B, 358 (1435): 1197–1204, PMC 1693218, PMID 12903653, doi:10.1098/rstb.2003.1317.
  • Howe, J. (November 1994), Artificial Intelligence at Edinburgh University: a Perspective, archived from the original on 17 August 2007, retrieved 30 August 2007.
  • Johnson, Mark (1987), The Body in the Mind, Chicago, ISBN 978-0-226-40317-5.
  • Kurzweil, Ray (2005), The Singularity Is Near, Viking Press.
  • Lighthill, Professor Sir James (1973), "Artificial Intelligence: A General Survey", Artificial Intelligence: a paper symposium, Science Research Council.
  • Luger, George; Stubblefield, William (2004), Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.), The Benjamin/Cummings Publishing Company, Inc., p. 720, ISBN 978-0-8053-4780-7.
  • McCarthy, John (October 2007), "From here to human-level AI", Artificial Intelligence, 171 (18): 1174–1182, doi:10.1016/j.artint.2007.10.009.
  • McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1.
  • Moravec, Hans (1976), The Role of Raw Power in Intelligence, archived from the original on 3 March 2016, retrieved 29 September 2007.
  • Moravec, Hans (1988), Mind Children, Harvard University Press.
  • Moravec, Hans (1998), "When will computer hardware match the human brain?", Journal of Evolution and Technology, 1, archived from the original on 15 June 2006, retrieved 23 June 2006.
  • Nagel, Thomas (1974), "What Is It Like to Be a Bat?" (PDF), Philosophical Review, 83 (4): 435–450, JSTOR 2183914, doi:10.2307/2183914, archived (PDF) from the original on 16 October 2011, retrieved 7 November 2009.
  • Newell, Allen; Simon, H. A. (1963), "GPS: A Program that Simulates Human Thought", in Feigenbaum, E. A.; Feldman, J. (eds.), Computers and Thought, New York: McGraw-Hill.
  • Newell, Allen; Simon, H. A. (1976). "Computer Science as Empirical Inquiry: Symbols and Search". Communications of the ACM. 19 (3): 113–126. doi:10.1145/360018.360022.
  • Nilsson, Nils (1998), Artificial Intelligence: A New Synthesis, Morgan Kaufmann Publishers, ISBN 978-1-55860-467-4.
  • NRC (1999), "Developments in Artificial Intelligence", Funding a Revolution: Government Support for Computing Research, National Academy Press, archived from the original on 12 January 2008, retrieved 29 September 2007.
  • Omohundro, Steve (2008), The Nature of Self-Improving Artificial Intelligence, presented and distributed at the 2007 Singularity Summit, San Francisco, California.
  • Poole, David; Mackworth, Alan; Goebel, Randy (1998), Computational Intelligence: A Logical Approach, New York: Oxford University Press, archived from the original on 25 July 2009, retrieved 6 December 2007.
  • Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
  • Sandberg, Anders; Boström, Nick (2008), Whole Brain Emulation: A Roadmap (PDF), Technical Report #2008-3, Future of Humanity Institute, Oxford University, archived (PDF) from the original on 25 March 2020, retrieved 5 April 2009.
  • Searle, John (1980), "Minds, Brains and Programs" (PDF), Behavioral and Brain Sciences, 3 (3): 417–457, S2CID 55303721, doi:10.1017/S0140525X00005756, archived (PDF) from the original on 17 March 2019, retrieved 3 September 2020.
  • Simon, H. A. (1965), The Shape of Automation for Men and Management, New York: Harper & Row.
  • Sutherland, J. G. (1990), "Holographic Model of Memory, Learning, and Expression", International Journal of Neural Systems, 1–3: 256–267.
  • Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, ISSN 0026-4423, doi:10.1093/mind/LIX.236.433.
  • de Vega, Manuel; Glenberg, Arthur; Graesser, Arthur, eds. (2008), Symbols and Embodiment: Debates on meaning and cognition, Oxford University Press, ISBN 978-0-19-921727-4.
  • Wang, Pei; Goertzel, Ben (2007). "Introduction: Aspects of Artificial General Intelligence". Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006. IOS Press. pp. 1–16. ISBN 978-1-58603-758-1. Archived from the original on 18 February 2021. Retrieved 13 December 2020.
  • Williams, R. W.; Herrup, K. (1988), "The control of neuron number", Annual Review of Neuroscience, 11: 423–453, PMID 3284447, doi:10.1146/annurev.ne.11.030188.002231.
  • Yudkowsky, Eliezer (2006), "Artificial General Intelligence" (PDF), Springer, ISBN 978-3-540-23733-4, archived from the original (PDF) on 11 April 2009.
  • Yudkowsky, Eliezer (2008), "Artificial Intelligence as a Positive and Negative Factor in Global Risk", Global Catastrophic Risks, Bibcode:2008gcr..book..303Y, ISBN 9780198570509, doi:10.1093/oso/9780198570509.003.0021.
  • Zucker, Jean-Daniel (July 2003), "A grounded theory of abstraction in artificial intelligence", Philosophical Transactions of the Royal Society B, 358 (1435): 1293–1309, PMC 1693211, PMID 12903672, doi:10.1098/rstb.2003.1308.
  • Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–198. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
  • Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it", Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93. "AIs are like autistic savants and will remain so for the foreseeable future.... AIs lack common sense and can easily make errors that a human never would... They are also liable to take our instructions too literally, giving us precisely what we asked for instead of what we actually wanted." (p. 93.)
  • Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
  • Hanna, Alex, and Emily M. Bender, "Theoretical AI Harms Are a Distraction: Fearmongering about artificial intelligence's potential to end humanity shrouds the real harm it already causes", Scientific American, vol. 330, no. 2 (February 2024), pp. 69–70. "[H]ype [about "existential risks"] surrounds many AI firms, but their technology already enables myriad harms, including... discrimination in housing, criminal justice, and health care, as well as the spread of hate speech and misinformation... Large language models extrude... fluent... coherent-seeming text but have no understanding of what the text means, let alone the ability to reason.... (p. 69.) [T]hat output... becomes a noxious... insidious pollutant of our information ecosystem.... [T]oo many... publications [about] AI come from corporate labs or... academic groups that receive... industry funding. Many of these publications are based on junk science [that] is nonreproducible... is full of hype, and uses evaluation methods that do not measure what they purport to... Meanwhile 'AI doomers' cite this junk science... to [misdirect] attention [to] the fantasy of all-powerful machines possibly going rogue and destroying humanity." (p. 70.)
  • Hughes-Castleberry, Kenna, "A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone, which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", Scientific American, vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose." (p. 82.)
  • Immerwahr, Daniel, "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", The New Yorker, 20 November 2023, pp. 54–59. "If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones." (p. 59.)
  • Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 61–63. Marcus points out a so far insuperable stumbling block to artificial intelligence: an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways. Our brain is so good at comprehending language that we do not usually notice." A prominent example is the "pronoun disambiguation problem" ("PDP"): a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.
  • Marcus, Gary, "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymied by the same old problems", Scientific American, vol. 327, no. 4 (October 2022), pp. 42–45.
  • Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26.
  • Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."

External links

  • The AGI portal maintained by Pei Wang