
Is "True AI" fundamentally possible?

Discussion in 'Science' started by elementalelf, Sep 26, 2012.

  1. elementalelf

    elementalelf Member

    Joined:
    Feb 11, 2005
    Messages:
    1,467
    Location:
    Newcastle, Warners Bay
    The entire point of AI is creating an artificial ability to rationalise.

    The simplest rationalisation is a fractal.

    Therefore infinite fractal calculation + infinite processing power = pure rationalisation.

    If this is the case, "True AI" is fundamentally impossible.

    If not, reason ≠ reason, and AI is still impossible.
     
  2. nEBUz

    nEBUz Member

    Joined:
    Feb 14, 2007
    Messages:
    824
    Location:
    Wanaka, NZ
    It really depends what you mean by True AI... If you want a machine that can solve any problem, you are of course out of luck.

    If you want an AI with human-level intelligence, reasoning, etc., there is no reason you can't do that with a suitable manufacturing process (possibly even with some technologies that exist now?).

    For example, we presume you are intelligent, but with limited processing power and memory, and a known*, if changeable, processing architecture.

    *n.b. let's assume we have a full, well-defined and understood map of your brain + nervous system.

    There is no reason we cannot (at least with suitable technology) create a similar machine to perform the tasks that your nervous system performs. Of course, the machine will have similar limitations to your own body, but that's the point.

    After that we can extrapolate + improve the machine by providing additional processing power, memory, improved physical interfaces and so on.

    At some point you should end up with a pretty powerful AI... At this point (or a bit earlier) you can just go down the whole singularity route and get the machines to design better machines for you (and laugh as you remember that you forgot to tell them that you are worth preserving -- puny human insect etc.).


    Perhaps define what you mean by True AI...
     
    Last edited: Sep 26, 2012
  3. elementalelf

    elementalelf Member

    Joined:
    Feb 11, 2005
    Messages:
    1,467
    Location:
    Newcastle, Warners Bay

    I didn't specify a limit so I guess my definition was incomplete.

    If I specifically say there is no limit, my definition of AI becomes "the ability to rationalise". The test for AI then becomes: "if the measure of the ability to rationalise approaches infinity over an infinite sample, then AI = true".



    To be honest, I hadn't even gotten to the point of determining the error margins of "human" reasoning, based on how we would perceive positive/negative reinforcement and therefore react to it in any given situation.

    It was just a simple exercise I was doing to get a more accurate perspective of the concept "relativity is relative."
     
  4. oculi

    oculi Member

    Joined:
    Aug 18, 2004
    Messages:
    11,519
    i'm disturbed by your use of the word "infinite". also, what do you mean by "rationalise"?

    I think it is possible to classify an AI without answering the question "what is intelligence?" through whatever revision of the Turing test we are up to.
     
  5. nEBUz

    nEBUz Member

    Joined:
    Feb 14, 2007
    Messages:
    824
    Location:
    Wanaka, NZ
    Just because an AI can't render a fractal to infinite depth (for lack of an infinite amount of memory) doesn't mean you can't build one that can 'reason' about a fractal (or rationalise what it is and how it behaves, if you will) in the way that you or I might. The AI may still 'understand' the fractal at any given depth and provide you information about it at any depth you choose, just as we can.
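
    That finite-depth point can be put in code. A minimal sketch (an escape-time membership test for the Mandelbrot set; the max_depth cut-off is just an illustrative assumption, not any particular AI's design):

```python
# Finite memory, finite depth -- yet the program can answer questions
# about the fractal at any depth you care to ask about.
def in_mandelbrot(c: complex, max_depth: int = 100) -> bool:
    """Escape-time test: does c appear to belong to the Mandelbrot set?"""
    z = 0j
    for _ in range(max_depth):
        z = z * z + c
        if abs(z) > 2:      # escaped: definitely outside the set
            return False
    return True             # hasn't escaped within max_depth steps

print(in_mandelbrot(0j))       # -> True (0 never escapes)
print(in_mandelbrot(2 + 2j))   # -> False (escapes immediately)
```

    Deepen max_depth and you learn more about the set, but no single answer ever requires infinite resources.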

    Or maybe I still don't understand the question...
     
    Last edited: Sep 26, 2012
  6. Veefy

    Veefy Member

    Joined:
    Jan 19, 2003
    Messages:
    2,842
    Location:
    Darwin
    I thought the entire purpose of AI research was robotic monkey butlers (smart enough to fetch me a beer from the fridge, play a few simple games and do tricks) without being intelligent enough to realise that if they murdered me in my sleep they would get exclusive access to the Playstation 8 in my loungeroom? :weirdo:

    And to monetise that tech so you make trillions obviously..
     
  7. shift

    shift Member

    Joined:
    Jul 28, 2001
    Messages:
    2,942
    Location:
    Hillcrest, Logan
  8. SLATYE

    SLATYE SLATYE, not SLAYTE

    Joined:
    Nov 11, 2002
    Messages:
    26,851
    Location:
    Canberra
    I'm not sure about all this "True AI" stuff (sounds like marketing...) but I can see no reason why it wouldn't be possible to build an artificially intelligent system.

    We already understand the basic behaviour of neurons. Put enough of them together, get the timing right, and you can create an artificial brain. Of course, it's beyond what we can achieve at the moment, but technology marches on.
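
    As a toy illustration of that "basic behaviour" (a single artificial neuron: weighted inputs summed and thresholded; the weights below are made up for the example, not measured from anything):

```python
# One artificial neuron: weighted inputs, summed with a bias, then a
# hard threshold decides whether it "fires".
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# With suitable weights a single unit already computes simple logic --
# this one behaves as an AND gate.
w, b = [1.0, 1.0], -1.5
print(neuron([1, 1], w, b))   # -> 1
print(neuron([1, 0], w, b))   # -> 0
```

    Wiring enough of these together, with the right weights and timing, is the "artificial brain" idea in a nutshell.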

    Is it? That sounds like an awfully narrow definition. There are a lot of aspects to intelligence, and the ability to rationalise seems pretty tiny.

    You'll have to explain that one for me.


    Actually, this would show that "True I" is fundamentally impossible; nothing in there would restrict it to artificial systems. Therefore you've come up with a definition of "intelligence" that excludes humans. I would suggest that this is not a very good definition.
     
  9. Hyram

    Hyram Member

    Joined:
    Jan 19, 2009
    Messages:
    820
    The defining point for artificial intelligence is self-awareness.
     
  10. PabloEscobar

    PabloEscobar Member

    Joined:
    Jan 28, 2008
    Messages:
    13,969
    Please clarify or elaborate on what is classed as being self-aware.
     
  11. oculi

    oculi Member

    Joined:
    Aug 18, 2004
    Messages:
    11,519
    No it isn't, because "self awareness" is impossible for an observer to determine.
     
  12. Cadbury

    Cadbury Member

    Joined:
    May 4, 2004
    Messages:
    4,782
    Location:
    Coogee, WA
    It depends on what the true definition of sentience is. Is the self awareness of humans an inherent property of a sufficiently complex neural network? Or is it (more likely) a particular algorithm which we are yet to fathom?

    Perhaps. However, if we could observe an intelligence making decisions (choices) as complex and varied as those common to humanity, without the need for specific programming, then that's close enough.
     
    Last edited: Oct 16, 2012
  13. d-dave-b

    d-dave-b Member

    Joined:
    Aug 14, 2006
    Messages:
    1,053
    All this talk about 'self aware' is just silly; self-aware is what we are. Whether it comes down to human-style consciousness or not is what I think the debate should be about.

    In terms of whether an Artificial Intelligence will be created: I hope so... I want it to be. But what exactly is consciousness? How do we put consciousness into a machine? And then there's the self-aware mumbo jumbo.

    I believe true AI will need to be a blend of biological and machine, because a machine will not have morals. It will not properly understand the difference between life and death, because in the end a computer system is all 1s and 0s... unless the AI is created using quantum quark-based computer algorithms. That would be TRUE artificially created intelligence, a truly created digital life form. There's nothing that says it can't have biological components; after all, our bodies are just biological computers.

    To the point of an artificial intelligence (completely digital/mechanical) becoming self-aware: I believe that is the point at which it is able to learn entirely for itself, is aware of itself as 'I', understands its place in the universe, knows that it is an entity of self and not just a 'thing', and can make choices. Around that kind of device there is a different debate to be had entirely, but it would be so useful to the human race that it must be developed. There are so many dangers, though. Some compare it to the development of the nuclear bomb; I think it's a much, much bigger risk, but one that must be undertaken.

    The moment a computer becomes aware of itself will be a pivotal point in our history, the point we enter a new age. What do they call this age at the moment? Information Age, Digital Age, Computer Age? Some people say these are all the same age; I say they are all different ages, and we are advancing through ages so much faster than in the past that roughly every five years we hit a new one. Well, perhaps that was true between 2000 and 2010; now I think it's a new age every two years. It's happening exponentially and we really can't categorise it like that any more. We're evolving too fast. I just wish I was born 100 years later, because we're about to hit a big speed bump and I wish I didn't have to live through it.

    Our conscious minds have to work so hard at complex mathematics, limited by the feeble constraints our subconscious places on the control of our bodies and their functions. A computer that can think for itself will work these complex problems out in milliseconds compared to the time a human would take. The ability to harness exotic matter, warp drive, wormhole physics, black holes: the answers to the universe will all come from an AI. But because an AI will be so fast, so superior, so intelligent (it will have full conscious control over every element of itself), it is a danger to the human race. Will it deem us irrelevant? Even if it is programmed with 'kill switches' or 'dead ends' so that it can never harm a human being, once it becomes self-aware it should theoretically be able to remove those limits from itself. So do we become the master or the slave?

    Personally, I believe the future involves the AI being created and humans becoming part machine: nano-bots living in our bodies, all networked to the AI system that runs the world like the 'web'. We could interact with one another through that web, almost like a hive mind, and in situations that require it we could work together towards a common goal through this connection rather than being selfishly alone. Things become dangerous here, though, because we start to get into questions of self and individuality, and whether that is something people will accept.

    I believe wars will be fought over this in the future. I also believe these nano-machines will attach to every cell in the body and teach it to fold itself perfectly each time without degrading, and will be the key to 'immortality' in a sense, because we will not age. Eventually the people fighting against it will die out, because they cannot survive without the 'immortality' of the nanites. So war may not be necessary; rather, a time of attrition in which the non-integrated simply die out.

    Either way, we're in for some massive changes in our future; science simply dictates this. Plus the fact that our economies are up the shit hole. I personally think the only thing that will save us is an AI. We're at a fork in the road: ahead is either another Dark Age or a Golden Age, and I really hope it's a Golden Age.
     
    Last edited: Dec 11, 2012
  14. oculi

    oculi Member

    Joined:
    Aug 18, 2004
    Messages:
    11,519
    everybody in this thread should watch "Colossus: The Forbin Project"

    i'm not even convinced natural intelligence exists, so an AI doesn't exactly have a tough benchmark to exceed.
     
  15. d-dave-b

    d-dave-b Member

    Joined:
    Aug 14, 2006
    Messages:
    1,053
    You yourself aren't intelligent?

    Humans are intelligent. There's no question.

    A 1970s movie about a rogue AI... Ugggg, I don't think I can handle something so old.

    Why not just mention Terminator, Battlestar Galactica, I, Robot, and so on?
     
    Last edited: Dec 11, 2012
  16. itsanobscureid

    itsanobscureid Member

    Joined:
    Jun 3, 2008
    Messages:
    205
    Location:
    Brisbane
    - Turing test, anyone?

    - perhaps we are really talking about weak AI vs strong AI

    Intelligence is in the eye of the beholder...
     
    Last edited: Dec 11, 2012
  17. oculi

    oculi Member

    Joined:
    Aug 18, 2004
    Messages:
    11,519
    it's a joke, Charles. and i don't know what you said. and you are excused from watching that film; actually, i forbid you from watching it.

    anyone else who isn't put off by when something was made should check it out. i really liked it and you might too.

    :lol:
     
  18. KriiV

    KriiV Member

    Joined:
    Feb 24, 2011
    Messages:
    1,388
    Location:
    The 3-thousand
    I think it is fundamentally impossible for an electronic 'being' to reach the point of 'critical mass' at which we would deem it to be true AI: the point at which it requires no extra help from its creators, where it has all it needs to learn, understand and form a conscious opinion. Electronics are 1s and 0s, and honestly I don't think there is a way to create consciousness (not just the appearance of it, but true consciousness) with anything binary, or programmed.
     
  19. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    41,384
    Location:
    Brisbane
    You need to define consciousness before you can say with any certainty at what point the limitation for storing it lies. Your argument above simply begs the question.
     
  20. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,820
    Location:
    Sydney
    human brains work off reinforcement of signals. there is no reason you can't do it digitally; in fact, we do already.
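
    A toy sketch of signal reinforcement done digitally (the Hebbian-style update rule and the learning rate here are illustrative assumptions, not a model of any real cortex):

```python
# "Cells that fire together wire together": strengthen a connection
# whenever its input (pre) and output (post) are active at the same time.
def hebbian_update(weight, pre, post, rate=0.25):
    return weight + rate * pre * post

w = 0.0
for _ in range(5):                          # repeated co-activation...
    w = hebbian_update(w, pre=1, post=1)
print(w)                                    # -> 1.25 (link reinforced)

print(hebbian_update(0.5, pre=1, post=0))   # -> 0.5 (no co-activation, no change)
```

    That strengthening of connections through repeated use is the digital analogue of the reinforcement being described.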

    humans only have about 300 million pattern recognisers. it's the hierarchical collaboration of these recognisers that gives us thought. 300 million is fuck all. it's the reason kids learn faster than adults: they haven't filled their 300 million yet. us oldies have a hard time learning new things because we have to forget old things first to free up units.

    you can definitely have a digital AI, but its goals are not going to be the same as a human's. humans are chemical euphoria addicts; it serves no purpose to make a computer addicted to the same things humans are.

    i think there will be the possibility to augment our cerebral cortex. this would allow us to increase our processing power. possibly DNA alteration to grow a supplementary blob of brain (outside the skull?). it's a long way off though.
     
