Saturday 11 January 2014

(2a. Comment Overflow) (50+)

24 comments:

  1. "...the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult."

    The other-minds problem. If all one can confirm is one's own consciousness, one is left with a rich internal life and nothing else, as though reality beyond one's own eyes were just a screen saver. But can we talk about what humans do without talking about what they feel? If the answer is yes, then behaviorism wins: we can treat humans like computers, whose computations are all programmable.
    Moreover, with words we can come close to feeling what another feels. Turing explains what it is that language does for us: what we have if we bracket off feeling and look at human nature.
    How can we study human nature without addressing the impossibility of studying consciousness? I think this question is a fruit afforded to us by the conception of a machine playing the imitation game. That is, if we figure out how a digital computer, or discrete-state machine, could imitate a human being in a long, ongoing conversation with one of us, we will have figured out how cognition works.

    "The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject... A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles."

    Although we are not discrete machines, we arrive at our mental states after discrete steps. Instead of an instantaneous explosion of flavor, or awareness, we have measured steps and calculations to make. It is our own fault that we do not consider the calculations made, that we take for granted how much work our brain is doing to create our passive experience.


    “It may be argued that... one cannot expect to be able to mimic the behaviour of the nervous system with a discrete-state system.”

    The analogy between a digital computer and a differential analyzer suggests that we can take a snapshot of a continuous process. Is that what language is? Freezing a mental state or a feeling, and sharing it? How dynamic is language, really?

    Informality of Behavior
    "For we believe that it is not only true that being regulated by laws of behaviour implies being some sort of machine (though not necessarily a discrete-state machine), but that conversely being such a machine implies being regulated by such laws. However, we cannot so easily convince ourselves of the absence of complete laws of behaviour as of complete rules of conduct. The only way we know of for finding such laws is scientific observation,"

    Stephen Wolfram explained how complex (dare I say unpredictable) behavior ought to be considered.
    Anything and everything is a computation, and any computation is programmable. The problem is, to write the program, you need the whole computation. If you only have half of the computation, you shouldn't be surprised when the pattern all of a sudden changes halfway through. Some people decide to become religious later in life. I could have told you he would do so, if you had allowed me to tell the whole story of his life.
    It is not that our lives are not programmable, it is only that in order to program life, you need the full computation, the full behavior, to the very last output. (I hope this isn’t getting trite). And I think that is what Turing is saying here. If you churn out every computation, you will have the program, but not until then.
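
    As a toy illustration of Wolfram's point, here is a minimal sketch (my own, not anything from Turing or Wolfram) of Rule 110, a one-dimensional cellular automaton whose tiny rule table produces behavior you cannot shortcut: to know the pattern, you must run the whole computation.

    ```python
    # Wolfram's Rule 110: a one-dimensional cellular automaton whose rule
    # table is tiny, yet whose evolution is capable of universal computation.
    # There is no shortcut; the only way to the pattern is to churn it out.

    RULE = 110
    # Map each 3-cell neighbourhood (encoded as a number 0-7) to the next state.
    table = {i: (RULE >> i) & 1 for i in range(8)}

    def step(cells):
        n = len(cells)
        return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                for i in range(n)]

    cells = [0] * 40 + [1] + [0] * 40   # start from a single "on" cell
    for _ in range(20):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
    ```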

    ReplyDelete
  2. Firstly, I am a bit confused about discrete-state machines versus non-discrete-state systems. I understand why Turing classifies machines as discrete-state systems, but I don't quite understand the following quote, where Turing describes the nervous system as a non-discrete-state system:
    "The nervous system is certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron, may make a large difference to the size of the outgoing impulse"

    That being said, on to my comment:
    "Some simple child machines can be constructed or programmed on this sort of principle. The machine has to be so constructed that events which shortly preceded the occurrence of a punishment signal are unlikely to be repeated, whereas a reward signal increased the probability of repetition of the events which led up to it."
    Turing proposes a learning program in which good 'responses' would be reinforced through a system of punishments and rewards. Indeed, some programs that learn have already been created. (see: http://vimeo.com/79098420)

    There is a distinction between thinking machines and learning machines: a machine that was able to learn good responses from bad would not necessarily understand why a response is good or bad. A child who is able to form complete sentences, by contrast, can explain why a word is bad or good, even if the explanation isn't the most detailed.

    In a way, the learning machines that Turing describes seem to be constrained by the inputs and reinforcements that they are given by people.
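
    As a hedged sketch of the reward/punishment principle in the quote above (the "correct" action and the update factors are my own arbitrary choices), events that precede a reward can be made more probable and events that precede a punishment less so:

    ```python
    import random

    # A toy child machine: rewarded actions become more likely to repeat,
    # punished actions less likely, exactly the principle Turing describes.

    weights = [1.0, 1.0, 1.0]              # one weight per possible action

    def choose():
        r = random.uniform(0, sum(weights))
        for action, w in enumerate(weights):
            r -= w
            if r <= 0:
                return action
        return len(weights) - 1

    for trial in range(500):
        action = choose()
        if action == 2:                    # the teacher rewards action 2...
            weights[action] *= 1.05
        else:                              # ...and punishes everything else
            weights[action] *= 0.95

    print(weights)  # the weight on action 2 now dominates the others
    ```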

    ReplyDelete
    Replies
    1. I think the key here is that this algorithm must be a learning algorithm; in other words, the machine must have the ability to learn. Otherwise, the only algorithm for simulating human cognition is as bad as listing the possible rules for every input and output. Not only is that not optimal, it would fall to Searle's CRA: the computer would not actually be cognizing, just "following the rules". To mitigate this problem, one must come up with an algorithm that makes the machine "learn" (likely an algorithm shorter than the exhaustive one).

      Artificial neural networks are probably the answer to machine learning: they try to imitate the brain's neural network, and they allow a machine to learn by reinforcing the correct output. This all seems very nice, except that machine learning can only achieve a certain level; learning stops progressing after it reaches a certain point. Furthermore, machines never seem to learn nearly as well as humans. Personally, I think this is because of our limited understanding of the human brain; after all, it was research in neuroscience that made neural networks possible. As we progress toward a deeper understanding of our brain, we could build machines with better learning algorithms that would probably behave nearly as well as humans.
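
      A minimal sketch of the error-feedback idea (a single artificial neuron with illustrative data and learning rate; real neural networks are of course far more elaborate). It learns the logical AND of its inputs by strengthening or weakening connections according to how wrong the output was:

      ```python
      # One artificial neuron learning AND by reinforcing the correct output.
      data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
      w1 = w2 = bias = 0.0

      for epoch in range(20):
          for (x1, x2), target in data:
              out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
              err = target - out           # feedback: how wrong were we?
              w1 += 0.1 * err * x1         # adjust each connection
              w2 += 0.1 * err * x2
              bias += 0.1 * err

      print(w1, w2, bias)  # the weights now implement AND
      ```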

      Delete
  3. After outlining the central components of a Turing machine (storage, executive unit, and control), Turing addresses an objection to the notion that a machine might be able to pass the Turing test. He writes, “the criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity.” From this, it seems that Turing supposes that what drives diversity in behavior is the number of instructions (contained within programs) that the machine can store for execution. If the only limit on machine behavior relative to human behavior is storage capacity of the machine in question, then all human behavior can necessarily be represented by instruction tables. This is an ambitious claim to make, considering that we don’t even yet have adequate cognitive theories that might be formalized into machine tables designed to produce these behaviors.

    Turing also addresses the objection that a machine “cannot be the subject of its own thought”. He writes, “‘the subject matter of a machine’s operations’ does seem to mean something, at least to the people who deal with it.” This brings up the problem of a lack of intentionality from the machine’s perspective. Humans may interpret symbols used in computations and thus render them meaningful. For computation to approach the realm of thought, it is necessary that the subject matter of the computation can be acknowledged by that which is doing the computation. Surely a primary part of human cognition involves the awareness of what our thoughts are about. If the machine itself cannot be somehow made aware of this, I would not recognize it as thinking.

    ReplyDelete
  4. Computing Machinery and Intelligence

    Although we have thoughts, all we can tap into is that we know how it feels to have thoughts. In class we've discussed that it's not about knowing whether a machine thinks or feels, since our only judgement on the matter would be based on observing what it does. With this in mind, the Turing Test should have less to do with arithmetic/computation and more to do with deciding whether the machine feels (i.e. does it do human-like things). If we define feeling as something outside the physical realm, is this something that we can ever test for? Furthermore, if the way I feel is in some way connected to my sensorimotor dynamics and my cognitive architecture, would "feeling" manifest itself the same way in a machine made of different tissue, which learns slightly differently and has some knowledge that it is different from humans? Would it even act the same way, so as to pass the revised understanding of the Turing Test?

    "In the process of trying to imitate an adult human mind we are bound to think a good deal about the process which has brought it to the state that it is in. We may notice three components.
    (a) The initial state of the mind, say at birth,
    (b) The education to which it has been subjected,
    (c) Other experience, not to be described as education, to which it has been subjected."

    I agree that, in order to fit the criteria of a robot that would pass the Turing Test, it would need to have a capacity to learn from its environment and develop in this way. I feel like this would seriously hinge on the robot sharing some degree of basic physicality with humans (especially in the areas of human anatomy that help us with non-verbal communication, such as eyes, facial muscles, and hands to gesture), which is something that Turing rejects earlier in his paper.

    ReplyDelete
  5. "I believe that in about fifty years' time it will be possible, to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

    Since this week’s reading is about the Turing Test, I think it would make sense to talk about one of the chatterbots alleged to have passed the (email version of the) TT. There was a news story last year titled “Turing Test Success Marks Milestone in Computing History”, in which the University of Reading declared that the Turing Test had been passed for the very first time at an event they had organized. The TT-passing program simulated a 13-year-old boy from Ukraine. This is a rather smart move: pretending to be a 13-year-old would justify the program making any kind of “common sense” mistakes, because it is perfectly understandable that a boy that young might not know certain things adults take for granted, and the fact that it’s a boy from Ukraine justifies some grammatical and ideological mistakes/differences. Combined with a large enough database full of real conversation bits from actual human beings, it’s not hard to imagine that the program would pass the criterion the organizers of the event had set (30% of interrogators think it’s human after conversing with it for five minutes, the same as Turing’s criterion).

    Now, a kid sib might ask: so it’s an intelligent machine, right? Why didn’t people make a big deal about this? Well, first off, this program is purely computational. It’s more like software than a machine, and can be installed and run on any modern computer. And we know that, without some kind of interpreter/cognizer (made up not just of code, but of an assembly of dynamic structures), it’s not very likely that a purely computational machine will ever make sense of all that it’s processing. In other words, this program does not have any meta-operational awareness: it neither understands nor feels anything when it converses with the interrogators. (By “understanding” here I mean that the program was not aware of the bigger picture, the purpose of its conversation.) Putting aside the hard problem for now, the skepticism remains. Not only was this program incapable of feeling, it also did not think, since all it did was put strings of words together in response to the keywords identified in the interrogators’ questions, much like Searle did in the Chinese Room. It makes no sense that a program incapable of thinking (in the sense of actively participating in the conversation and piecing together an original answer that is indistinguishable from that of a real human) should ever pass the Turing Test, which Turing hypothesized with the aim of answering the question “Can machines think?”. Therefore, in my honest opinion, chatterbots do not qualify as intelligent machines, even though many of them may soon be able to pass the email version of the Turing Test.
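
    For illustration, here is a minimal keyword-matching chatterbot in the spirit described above (the keywords and canned replies are invented examples): it pieces strings together with no understanding at all, and a lenient five-minute interrogation might never expose it.

    ```python
    # A toy keyword-matching chatterbot: no parsing, no meaning, just
    # canned templates triggered by surface keywords, ELIZA-style.

    RULES = [
        ("mother",   "Tell me more about your family."),
        ("computer", "Do machines worry you?"),
        ("ukraine",  "I am from Odessa. Is nice city, yes?"),
        ("?",        "Why do you ask that?"),
    ]

    def reply(utterance):
        lower = utterance.lower()
        for keyword, canned in RULES:
            if keyword in lower:
                return canned
        return "That is very interesting. Please go on."

    print(reply("Where in Ukraine do you live?"))
    print(reply("What do you think of Picasso?"))
    ```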

    ReplyDelete
  6. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."

    I think that this argument, what Turing calls “The Argument from Consciousness,” is the most important of the dissenting opinions in that it teases apart the difference between “can a machine think” and “can we be fooled by a machine.” Namely, as Turing is quick to point out, Jefferson’s statement about mechanisms’ lack of feeling “appears to be a denial of the validity of our test.” While there is (logically) no way to know for certain whether a machine has emotions, there is a way to know whether humans can successfully judge whether A or B is a machine within the TT. It seems that what Turing is trying to show here is that the Consciousness Argument is not especially relevant to the TT: arguing about whether or not machines can think is not the same as creating a machine that can imitate.

    Turing uses an example in this section to show a machine responding viva voce. The machine’s answers could pass for a (rather pedantic?) human question-response interaction, neatly illustrating that the complexities of human conversation allow for enough leeway that even phrases learned viva voce are deemed appropriate, indeed acceptable.

    ReplyDelete
    Computing Machinery and Intelligence is regarded as one of the most profound works by Alan Turing. In this paper, he explains what the imitation game is and describes the digital computers he had in mind, but mainly he responds to the criticisms and arguments against the question he proposed: "Can machines think?" I believe he considered pretty much every objection he encountered and tried to give explanations regarding those objections, and I am glad the questions I raised in my first week's commentaries appear in his paper as well. In this commentary, I will comment on two of them: the Mathematical Objection and Lady Lovelace's Objection.

    In the section the Mathematical Objection, Turing writes:
    "There may, of course, be many such questions, and questions which cannot be answered by one machine may be satisfactorily answered by another. We are of course supposing for the present that the questions are of the kind to which an answer 'Yes' or 'No' is appropriate, rather than questions such as 'What do you think of Picasso?' The questions that we know the machines must fail on are of this type."
    I do not regard this part of the defense as tenable. Although questions such as "What do you think of Picasso?" will fail the machines, such questions do not exist in Gödel's logic systems either. The propositions or statements in Gödel's logic systems have only true or false values; there are no interrogative sentences in them, and there is no third value. What does that mean? It means there exists a question formed like "Is Picasso's painting skill great?" (not necessarily this exact question) that the machines will fail.
    In the same section, Turing writes:
    "The short answer to this argument is that although it is established that there are limitations to the Powers If any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect."
    If I understood right, Turing suggests that maybe there's a limitation of the human intellect, just as with the logic systems that Gödel mentioned. However, although it might be true that both have limitations, what Gödel succeeded in proving in his incompleteness theorem is that there is something we know to be true that cannot be proved true or false within the logic system. That suggests the logic system cannot do what the human intellect can do; in other words, it has a much larger limitation than the human intellect.

    In the section Learning Machines, Turing refers to the Lady Lovelace's Objection and writes:
    "Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child."
    He claims that machines can learn, and that we can teach them to do better. I totally agree with this idea, because machines do learn to do better if we teach them. However, there is a difference between a child's brain and a notebook. A notebook cannot write anything by itself; someone else needs to hold a pen and write on it. People, on the other hand, often self-reflect and adjust their behaviors; no one else is required for this process, and people can try to do better by themselves. I hope there is some explanation for this perspective.

    ReplyDelete
    In his 1950 paper Turing proposes a three-person game consisting of a human, a machine and an interrogator, whose task it is to distinguish between the human and the machine based on their answers to questions asked via email. If the machine is indistinguishable from the human, it can be taken as an affirmative answer to the question of whether or not machines can think. Or rather, it will show that machines are capable of generating performances identical to those of beings who can think. I’m a bit unclear as to how Turing is defining machine here, because he writes, “We also wish to allow the possibility that an engineer or team of engineers may construct a machine which works, but whose manner of operation cannot be satisfactorily described by its constructors because they have applied a method which is largely experimental.” (Allowing this seems odd to me because we’re trying to understand human cognition. Why would we attempt to use something with a mechanism we don’t understand in order to elucidate the mechanism behind something else we don’t understand?) Then Turing goes on to specify that the machine has to be a digital computer. However, when we’ve discussed the Turing test in class I was under the (possibly incorrect) impression that a machine didn’t have to solely function through computing in order to be eligible for the test. So I guess I’m wondering if Turing changed his mind about what exactly qualifies as a machine appropriate for this test after he wrote this paper.

    Turing goes on to refute the reasoning behind a bunch of different objections to the idea of computers being capable of thinking, including the argument from consciousness. “I do not wish to give the impression that I think there is no mystery about consciousness… But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.” The argument from consciousness is essentially that even if a robot/computer can do everything we can do, we cannot be sure that it can feel the same way that we do. The only way to truly know if another human/animal or robot can feel is to actually experience what it is like to be that human/animal or robot. I trust that other humans can feel and I trust that my pets can feel (although it is impossible to know either of those things for sure) so presumably if a robot passed the Turing test I would also trust that it could feel.

    ReplyDelete
  9. "The criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity. Until fairly recently a storage capacity of even a thousand digits was very rare. The criticisms that we are considering here are often disguised forms of the argument from consciousness."

    I find Turing's earlier explanation of machines and errors of functioning vs. errors of conclusion to be satisfying in showing how easily the argument from various disabilities may be entangled with, or merely a disguised form of, the argument from consciousness. That some certain disability should demarcate the presence of thinking is understandable, in reference to human performance; that the dearth of such behaviour in a machine would indicate the absence thereof is not, as Turing says, a realistic criticism of storage capacity, but seems rather to veer towards confusion regarding intentionality. Assuming a machine can be the subject of its own thinking, it could be said the circumstances of its errors of conclusion differ from those of humans in that the latter's are unintentional in the course of finding the correct answer, while the machine purposefully provides its incorrect answer; therefore it is not "thinking" as a human would. However, this is to mistake the mismatch between a human's ultimate intention (to correctly answer the question) and the machine's (not, in fact, to provide an answer suiting the question, but to imitate a human) for a test of the presence of thought within them. In ascribing this correctly to a question of intention, we have now arrived at the argument from consciousness which, as Turing shows, is easily subjected to the imitation game.

    ReplyDelete
  10. This comment has been removed by the author.

    ReplyDelete
  11. "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain."

    The child brain changes so rapidly and involves such a high degree of complexity that I believe we can rule out the notion that "there is so little mechanism in the child brain that something like it can be easily programmed". How would we account for and model the rapid learning and development that takes place both physically (e.g., synaptic pruning) and intellectually (e.g., learning how to spell)? How would we ever ensure that the complexities of human development were represented such that a machine might pass the 6-year-old Turing test before passing the 7-year-old Turing test before passing the 8-year-old Turing test, and so on? It seems to me that aiming to build a machine that functions on par with adults may actually be easier, due to the relative stability of the adult mind and body.

    I am also not convinced by Turing's assertion that a complete and sufficient education process can take place without giving machines the ability to go out and interact with the world as a human would. His example of Helen Keller did little to sway my thinking on this matter as she was clearly highly capable. To my knowledge, the only non-average aspects of Helen Keller's abilities were that she could not see or hear. She could, however, manipulate objects, walk, speak (after learning), smell, and experience the world through touch along with her other capacities. She was also born with sight and hearing -- it was not until she came down with an illness at 19 months of age (presumably after much learning had taken place) that she stopped hearing and seeing. As such, the life of Helen Keller could be used to counter Turing's own argument: learning to do many of the behaviours associated with typical human life took substantially longer for Keller even though the vast majority of human capabilities were available to her throughout her education.

    More importantly, Keller was a human (i.e., she should pass the Turing test if it is properly designed), so why is her life being used as a case in point for the argument that certain machines can or cannot learn to pass the Turing test? Can a human fail the Turing test? If so, what does that say about the inherent biases of the Turing test, and how should this test be modified to more accurately represent human intelligence in all its forms? Surely one would not argue that a living person does not display intelligence on par with another human being simply because they display such intelligence in unexpected ways.

    ReplyDelete
  12. Part 1: (Due to length)
    While I can see why we might attempt to define cognition as having passed the Turing Test, I am not certain that it is enough. Nor am I convinced that over-simplifying the human mind to a set of programs that can be carried out by a machine is enough to create cognition.
    In describing a "human computer," Turing says: "The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail. We may suppose that these rules are supplied in a book, which is altered whenever he is put on to a new job." Yet, if this is supposed to imitate a human mind, is this to say that there are rules that are consistently followed in our minds? From personal experience, I quite often have memories come up with no discernible trigger, I fail to complete a simple task that I've done a million times, and I learn new ways of completing identical tasks. So this rigidity is quite stifling. This can apparently be resolved with randomness, or random generators that allow for a wider range of possible outputs. I find this problematic. A machine cannot generate true randomness; there is a limit to such an algorithm. While our minds cannot generate random sequences either, we cannot say for certain whether the mechanism of the mind has the ability to generate true randomness. Furthermore, saying that randomness is akin to free will sounds absolutely ridiculous. Acts of free will, such as getting a drink, doing activities for pleasure, or perhaps pursuing a new hobby, are not randomly generated and often do not require an input (or trigger for a sequence of operations to begin). This, though, is not really my main concern. (I must note that the state of things in the realm of decision making in artificial intelligence doesn't give me much confidence. Machines cannot make decisions based in feelings, particularly when they go against logic.)
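
    A small sketch of the point about machine randomness: a linear congruential generator (the constants are classic textbook parameters) produces sequences that look random but are entirely determined by the seed, so the machine's "choices" are replayable.

    ```python
    # Pseudo-randomness is deterministic: same seed, same sequence.

    def lcg(seed):
        state = seed
        while True:
            state = (1664525 * state + 1013904223) % 2**32
            yield state

    a = lcg(42)
    b = lcg(42)
    print([next(a) for _ in range(5)])
    print([next(b) for _ in range(5)])  # identical: nothing was "free" here
    ```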

    Toward the end of the article, Turing mentions a set of objections. I found something that bothered me when he discussed the objection from disabilities. When speaking of the machine's accuracy giving away the fact that it is a machine, Turing gives the solution: "The reply to this is simple. The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator." But this is just a trick created by programmers. Is the machine really thinking? This contrived set of rules may be enough to pass the Turing Test, but is the Turing Test enough to prove that the machine is cognizant? One could potentially create a machine that can scan the entire internet for conversations and be able to give responses just from there, passing a Turing Test, but that's just like posing a question to Google and clicking "I'm feeling lucky".
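
    To make the "trick" concrete, here is a hedged sketch of what such programming might look like (the delay and error rate are my own arbitrary choices): the machine knows the right answer but deliberately delays and errs, as Turing suggests for the arithmetic question in his paper.

    ```python
    import random, time

    # The machine knows the right answer but delays and occasionally errs
    # on purpose. In Turing's own example, asked "Add 34957 to 70764", the
    # machine pauses about 30 seconds and answers 105621 (the true sum is
    # 105721), purely to seem human.

    def humanlike_sum(x, y):
        time.sleep(random.uniform(1, 3))     # humans need time to compute...
        answer = x + y
        if random.random() < 0.1:            # ...and occasionally slip up
            answer += random.choice([-100, -10, 10, 100])
        return answer

    print(humanlike_sum(34957, 70764))
    ```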

    ReplyDelete
  13. This comment has been removed by the author.

    ReplyDelete
  14. Part 2

    I also have objections to his attempt to refute Lady Lovelace’s objection. “She states, ‘The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform’.” I find myself continuously returning to this thought. While Turing and Hartree discuss that learning is possible in machines, it is qualified by the fact that it must be conditioned. Learning for a machine requires a certain amount of feedback, or else nothing can be adjusted and therefore stored and learned. But can we quantify all feedback? I believe that feedback sometimes comes as both good consequences (like learning a lesson) and bad ones, and sometimes we will repeat our mistakes because the potential good managed to outweigh the memory of the bad.
    Yet when we talk about feedback in learning (particularly in humans) I always recall the poverty-of-the-stimulus argument from linguistics. If language is essential to cognition (and I agree it is), then how is it learned? And if it is inborn as a set of parameters that get set over time through exposure to language, this is not teaching; there is no action on the part of the teacher. Furthermore, you can try to force syntax on a child all you want: they will make the mistake over and over, no matter how much you correct them, until one day they don't make the mistake.

    Lastly, I must mention the plasticity of the human brain. If we were to, say, delete a part of the code from a computer program, the code becomes unreachable and is unlikely to compile properly. Yet the mind is capable of continuing to function, often quite well. Perhaps there are programming techniques that could handle this, such as self-writing code of some sort, but doesn't it seem that with every caveat, the amount of programming required becomes exponentially greater?

    I believe the Turing Test is important to the discussion of the potential imitation of cognition in machines and I understand its value, but I am still not entirely convinced that it can reach beyond the theoretical plane. Theoretically there is validity there, but I cannot help but consider the potential practical applications.

    ReplyDelete
    One question that doesn’t seem to be addressed in the paper is the issue of semantic analysis. For the machine to correctly understand the interviewer’s question, it would have to have a program designed for complete conservation of the meaning of uttered or written sentences, not basic recognition of words. This means adequately recognizing the presented words according to an understanding of the context they occur in: distinguishing between homonyms, correctly identifying subject/object relations, and resolving other types of ambiguities. This in turn implies that the semantics of natural language has been formalized in such a way that it can then be translated into the computer’s language. This fine-tuned understanding that humans usually have seems quite difficult to recreate in machines. But Turing (1950) bases most of his arguments on the claim that the apparent limits to the faculties of machines are only due to their limited storage capacity, which is only a temporary state, as he assumes it will increase rapidly in the future. I think to this we must add the necessity of adequate programming. In this sense the article made me reflect a lot about how ‘teaching’ a machine to behave in some way is first and foremost an observation of how we think we lead our mental processes. It is more conscious, at least about the form, than the education we normally provide to children, since we presuppose some faculties they possess.
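
    As a toy sketch of the homonym problem (the sense inventory and cue words are invented examples), even the crudest context check already goes beyond "basic recognition of words":

    ```python
    # Naive word-sense disambiguation: pick the sense whose cue words
    # overlap most with the sentence. "bank" has two invented senses here.

    SENSES = {
        "bank": [
            ({"river", "water", "fishing"}, "sloping land beside water"),
            ({"money", "loan", "account"},  "financial institution"),
        ],
    }

    def disambiguate(word, sentence):
        words = set(sentence.lower().split())
        best = max(SENSES[word], key=lambda sense: len(sense[0] & words))
        return best[1]

    print(disambiguate("bank", "He sat on the bank fishing all day"))
    print(disambiguate("bank", "She asked the bank for a loan"))
    ```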

    ReplyDelete
    Turing discusses the possibility of a machine that starts out child-like and eventually grows and learns. While this is a fascinating concept, he mentions a reward/punishment system for learning that I don't understand. How can this be applied to a machine? I can try to comprehend reward, in that the robot keeps running smoothly and receiving commands, but how does one convey error? How can you punish a robot? I suppose this relates back to the class discussion in which we were asked whether or not we would kick a robot. In the end, it comes down to the question of whether or not the robot feels. How else would it perceive error? Suppose it is no longer running: does it loop back? Does it stop receiving commands? We know when it has made an error, but in addition to feeling, does it have awareness? The robot does not know it made a mistake, so it could simply go on receiving commands and carrying out actions. While this might be a theory to explore further, I think we must first find an alternative to behaviorist learning.

    ReplyDelete
  17. "How can the rules of operation of the machine change? They should describe completely how the machine will react whatever its history might be, whatever changes it might undergo. The rules are thus quite time-invariant. This is quite true. The explanation of the paradox is that the rules which get changed in the learning process are of a rather less pretentious kind, claiming only an ephemeral validity. The reader may draw a parallel with the Constitution of the United States."

    Turing, in proposing a hierarchy of rules in which some may be changed and others are impenetrable, is positing something similar to Pylyshyn's cognitive architecture. That is, he is arguing that human behaviour has a set of core unchangeable functions and a set of more malleable functions that rise out of the core functions and experience.

    In the building of a Turing machine, the builders would have to suss out which human abilities are of the permanent kind and which can be learned or fiddled with. To incorrectly categorize a rule would have an enormous effect on the behaviour of a learning machine. For example, should a logical rule like 'If A implies B, and B implies C, then A implies C' be included as a permanent rule? Although this rule is commonly used by humans, humans certainly do not act logically all the time. Would the Turing machine be able to disobey this permanent rule in the way that humans sometimes do?
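
    A minimal sketch of such a two-tier hierarchy (the facts are invented examples): a permanent inference rule applied over a mutable store of learned facts, where learning may revise the facts but never the rule itself.

    ```python
    # Two tiers of rules: a fixed "constitutional" rule (transitivity) and
    # an ephemeral, revisable store of learned implications.

    learned = {("rain", "wet"), ("wet", "slippery")}   # mutable facts

    def implies(a, c):
        if (a, c) in learned:
            return True
        # the permanent transitivity rule, applied over the mutable facts
        return any((a, b) in learned and implies(b, c)
                   for b in {y for (_, y) in learned})

    print(implies("rain", "slippery"))    # True, derived by the core rule
    learned.discard(("wet", "slippery"))  # learning can revise the facts...
    print(implies("rain", "slippery"))    # ...but never the rule itself
    ```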

    ReplyDelete
  18. It is tempting yet frightening to imagine a world in which we have designed ‘things’, biological or otherwise, that seem to think like we do. While some people look deeper and deeper at the physical reality of humans to understand how we think, Turing takes an alternative route to answer the same question.

    “We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?”
    - Turing (1950)

    Instead of trying to recreate a whole entity that is entirely indistinguishable from a human, Turing decides that, since physical appearance is taken out of the equation, if this entity can produce outputs that would trick a person into believing they are speaking to another person, then we have a thinking machine.

    The beauty of this distillation of ‘thinking’ is that it simplifies the problem to its bare bones: forget giving a machine synthetic skin – if it always responds convincingly, we have the answer to our problem! All we have to do now is remember how we built this machine so that we can decide what the successful ‘thinking’ mechanism looks like. This does not mean the mechanism will be the same as our human thinking mechanism, but that wasn’t what we were asking. The whole idea is to reverse-engineer thinking until we have some sort of design that ‘thinks’, which could be a whole new mechanism, different from our own! The consequences of finding a successful design would open a door to a new world of understanding. Some would say this experimentation verges on hubris, but perhaps that is because we do not really want our ‘unique’ and ‘brilliant’ capacities to be shared with cold metallic machines. In any case, this sort of finding would not go unnoticed.

    ReplyDelete
  19. This comment has been removed by the author.

    ReplyDelete
    ''Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets... Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.''

    It seems here that Turing is underestimating the complexity of the infant or child mind, treating it as a notebook of blank sheets onto which anything can be written. As he himself points out, the structure of the child machine/brain is the hereditary material, i.e. a mechanism created by an unknowable number of interactions over an immense amount of time, which we call evolution or evolutionary forces. Thus the task of creating a learning machine is a task of consolidating millions of years of evolutionary forces (admittedly a slow process) into a small window of scientific experimentation. Turing admits this, saying ''one may hope, however, that this process will be more expeditious than evolution.''

    If the process of creating a learning machine, or a child machine is a concentrated version of evolution I wonder how much this process will tell us about how the mind works. It may allow us to create a robot which can pass the TT, but it may leave us wanting for insight into how and why the computer is able to do what it does.

    Finally, Turing again seems to ignore dynamic properties (i.e. cognitive faculties that interact with the real world, e.g. seeing and the ability to walk) as crucial to the human mind. His use of Helen Keller as a justification for ignoring these human faculties is not sufficient, as Helen Keller relied heavily on her sense of touch to communicate, think, and understand. Computers, then, must also have some sort of dynamic or interactive property in order to be learning and thinking machines. Otherwise they are simply running virtual realities/simulations that will eventually lose relevance in our entropic and ever-changing world.

    ReplyDelete
    As the title of the paper's first section suggests, the Turing Test is an imitation game. Its purpose is to test whether a machine can achieve a state where it is indistinguishable from a real human. However, the point of the Turing Test is not to create a machine that completely mimics a human's appearance and behavior, but to mimic a human on an intellectual level.

    The test is conducted by placing a machine (the interrogator wouldn't know that it's a machine) behind a wall, while a human interrogator asks questions from the other side of the wall. The machine tries to "provide answers that would naturally be given by a man." The answers can take the form of text on paper, words on a screen, or even answers written down by a human assistant. If the human interrogator cannot tell whether it's a machine or a human answering the questions, then the machine passes the Turing Test.

    It's worth noting that the purpose of the Turing Test is not to build machines that "think" like we do. As the example in Turing's paper shows, a man would do poorly at arithmetic, at which computers excel, but a machine could mimic this behavior by INTENTIONALLY adding errors and wasting time when giving the answer, thus disguising itself as a human. Is the machine thinking like we do? No. But as Turing said in the paper, we "should not be troubled by this objection." Personally, I think that's a very good point, and it's the direction of many artificial intelligence projects out there today; Apple's artificial assistant, Siri, for example. The aim of those projects is to mimic human interaction, and thus provide a friendlier mode of human-machine interaction. But under the hood, the machine is still running instructions and solving problems in a linear way.

    In the paper, Turing also presents his idea of the digital computer, which isn't much different from how present-day computers work. He also dealt with some contrary views on intelligent machines. One of these is Gödel's theorem (no logical system can prove its own completeness and consistency at the same time), which implies that even the best logical machine will either give some answers wrong or have no answer at all for some questions. Turing's response is that we humans are sometimes wrong as well, and we cannot prove our own correctness either. So being complete and correct isn't necessary to pass the Turing Test.

    In the last sections of the paper, Turing also proposed how we might approach building a machine that will pass the Turing Test: we can start by building a machine that can learn. There would be some kind of reward and punishment mechanism to help the machine learn. He further expands on this: "An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside". So the way the machine learns could be very different from how we learn. We learn by developing new connections among the neurons in our brain, but the machine may learn by making up new algorithms that give answers closely resembling those given by us.

    ReplyDelete
    In this paper, Turing describes a way to find out whether machines can think. He outlines the "imitation game", or what has come to be known as the Turing Test (TT). The TT is set up as such: a judge must try to tell which of two "people" is a machine and which is a real person. The two are in rooms separate from the judge and have only written/electronic communication (it is therefore a test of the intellectual abilities of the machine and not its visual properties). If the machine's behaviour is indistinguishable from the person's, or in other words the judge can't reliably tell who is whom, then the machine passes the test and can be said to be thinking.

    Turing outlines which machines should be allowed to take part, saying that only digital machines, i.e. engineered computers consisting of a store (memory), an executive unit (which carries out the individual operations) and a control (which makes sure the instructions are followed in the right order), should be allowed (and not, for example, clones of men). Turing then outlines the requirements for a universal computer that can mimic any other discrete-state machine, and which therefore should be able to successfully pass the test.
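
    A toy sketch of those three components working together (the instruction set and program are invented for illustration): the control fetches each instruction from the store and hands it to the executive unit in order.

    ```python
    # A toy digital computer: store (memory), executive unit (carries out
    # operations), and control (fetches and sequences the instructions).

    store = {"counter": 0, "limit": 3, "pc": 0}

    program = [                           # instruction table, held in the store
        ("add", "counter", 1),
        ("jump_if_less", "counter", "limit", 0),
        ("halt",),
    ]

    def execute(instr):                   # executive unit
        op = instr[0]
        if op == "add":
            store[instr[1]] += instr[2]
        elif op == "jump_if_less" and store[instr[1]] < store[instr[2]]:
            store["pc"] = instr[3] - 1    # -1 because control advances after
        return op != "halt"

    while True:                           # control: fetch, execute, advance
        if not execute(program[store["pc"]]):
            break
        store["pc"] += 1

    print(store["counter"])               # prints 3
    ```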

    Turing also goes through and argues back against objections to his argument. The most interesting is the mathematical argument, which points to the limits of discrete-state machines. Gödel's theorem states that a consistent system must be incomplete, meaning there are certain statements that cannot be proved within that system. If, on the other hand, everything can be proved, the system must be inconsistent and contain a contradiction. Turing sums this up nicely when saying, "It states that there are certain things that such a machine cannot do." He then makes a counter-argument by stating, "In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on."

    Another argument is from consciousness. "According to the most extreme form of this view the only way by which one could be sure that machine thinks is to be the machine and to feel oneself thinking." Turing points out that the "mysteries of consciousness" are important but not really relevant to his test. The "Disabilities Argument" is trumped by the realization that the limitations of technology are only temporary.

    Then there is "Lady Lovelace's Objection", that a machine can "never do anything really new", among several others from neuroscience. A final argument that I found interesting was the "Learning Machines" argument, where the human mind cannot be simulated since it is the sum of all its experiences thus far (i.e. both nature and nurture would have to be simulated). Turing says, "In the process of trying to imitate an adult human mind we are bound to think a good deal about the process which has brought it to the state that it is in. We may notice three components. (a) The initial state of the mind, say at birth, (b) The education to which it has been subjected, (c) Other experience, not to be described as education, to which it has been subjected." He divides the problem into the education process and the child's programme. A connection can be drawn with the later theory of Universal Grammar when Turing says, "Alternatively one might have a complete system of logical inference 'built in.'" Finally, Turing suggests that a random element would be useful to include in a machine, especially for problem-solving where there are many solutions.

    ReplyDelete
  23. This comment has been removed by the author.

    ReplyDelete