My avatar

Some screenshots and a PowerPoint presentation from my stay in Second Life


Freedom and ambiguity of representation


New article out!

https://www.academia.edu/8348881/The_Process_of_Exchange_from_Phenomenological_Perspective

This short essay is an attempt to analyze the exchange of money and commodities in terms of the framework offered by phenomenology. Marx’s Capital is a thoroughly phenomenological work in the Hegelian sense of the notion. Marx conceptualizes phenomenology as a method of exposing the real-world phenomena that are prior to abstract philosophical inquiry. In the first volume of Capital, Marx applies the explanatory categories of German Idealism to “purely” economic and social issues. In this sense his analysis is close to the one found in Michel Foucault’s The Order of Things. Foucault, who is mainly preoccupied with understanding the establishment of a certain subject (in this context the modern economic subject), deals there with the notion of wealth and traces the changing relations between money and prices from the 15th to the late 17th century, as well as such issues as mercantilism, utility and the creation of value. There are significant differences between Marx’s and Foucault’s approaches. Whereas Foucault’s analysis is oriented towards the hermeneutics and deconstruction of the notion of exchange as a constitutive activity of the subject, Marx is mainly preoccupied with the description of the activity of exchanging and its consequences. However, even though the conclusions of Capital and The Order of Things differ significantly, the methods of analysis reveal many similarities. Both texts work with a deeper understanding of exchange as a multi-layered process of signification, accumulation, and transformation. The essay briefly analyzes each of these functions of the exchange process and indicates a new interpretation that arises from the framework offered by both authors.

FROM TYPEWRITERS TO DECISION-MAKERS

The article was first published on Biweekly.com: http://www.biweekly.pl/article/1879-from-typewriters-to-decision-makers.html

On November 25, 2008, “The New York Times” published an article by Cornelia Dean that went largely unnoticed around the world. “A Soldier, Taking Orders From Its Ethical Judgment Center” was based on an interview that Dean conducted with Dr. Ronald C. Arkin. In it, Dr. Arkin describes his work for the U.S. Army, which involves designing robot drones, mine detectors, and sensing devices. He also mentions that his greatest dream is to construct an autonomous robot capable of acting on its own on the battlefield, making decisions without taking orders from humans. He claims that work on the technology has been going on for several years.

While the concept itself may sound mad, it is in fact neither new nor surprising to people who have participated in similar projects for many years. The idea of creating systems that would assist us in making strategic military decisions and help us solve problems on the battlefield, as well as send armies of robots to war, is deeply rooted in the post-World War II history of the West. The fifties and sixties were the era when the idea enjoyed its greatest popularity, at the height of the Cold War arms race between NATO, led by the United States, and the Soviet Union and Warsaw Pact countries. It was then that cybernetics was born in the laboratories of American universities. This field of science, which takes its name from the Greek word kybernán (to steer or to control), studies methods of controlling complex systems based on the analysis and feedback of information, in the belief that the more advanced parts of the globe demand effective models for strategic planning and risk management in order to protect themselves from enemy invasions. As Phil Mirowski notes in “Cyborg Agonistes”, game theory and operations research (OR), which laid the groundwork for cybernetics, were greatly popular in the 1950s because they offered the tantalizing hope of creating a unifying theory for organizing the post-war world: synergy between dynamically developing scientific fields, a stable economy, and the rational management of mankind’s growing resources.

It seemed natural that man, faced with significant advances in technology in a world threatened by the unknown, would be forced to work with allies that he himself had created ― machines. The debate was about the nature of that relationship. The first major project was a concept created by Alan Turing. In it, a human was to be paired with a computing machine. The human was responsible for management, while the machine, as the “more intelligent” partner, was to analyze data and present potential solutions. Other solutions were proposed as well, ones completely different from Turing’s imposed division of roles. One of them was formulated by the neuroscientist and information theorist Donald MacKay, who was the first to introduce Darwinism into cybernetics. Aside from the standard OR instrumentarium, his descriptions of machines include such words as “adaptation” and “purpose”. MacKay rejected the idea that humans are composed of subsystems that could be replicated outside the body, nor did he accept the notion that human action and thought (and thus the actions and thoughts of a machine) were imaginable outside man’s natural environment. He proposed that scientists focus on the purpose of actions and on analyzing behavior that would enable us to survive. In the context of the Cold War, MacKay’s concept singlehandedly revolutionized the notion of the enemy ― the other. He wanted to teach machines something that humans, for obvious reasons, could never comprehend: he wanted them to think that “the other is me”. Unfortunately, the concept was never fully realized in practice.

A decade later, the lack of quantifiable results led to a reevaluation of the concept of creating a perfect fighting machine that would replicate many human functions while remaining intellectually superior. Subsequent anthropomorphic robot projects would grind to a halt as soon as the machine was expected to learn, instead of just performing calculations or tasks. The U.S. Army responded by gradually reducing funding for universities contracted to conduct research on “the terminator”. But just as the source of government funding was drying up, the burgeoning field caught the interest of American businesses, known for their investments in effective solutions rather than ambitious ones. The business world did not demand that the machine become a “better human”. The 1970s thus became a booming era for a particular component found in lethal autonomous robots, namely decision support systems (DSS).

Simply put, these systems rely on “if–then” logic. Most consist of three basic parts: a database, which stores the system’s knowledge; an inference engine, which serves as the brain of the system; and a user interface. The decision-making process in a DSS assumes that a good decision can only result from a proper decision process. In other words, if the mechanism functions correctly, then the decision must be correct. This transparent logic suggested that DSSs could be useful in business sectors that saw great development after the war: finance and health care, as well as administration and taxes. But the more successfully these DSSs handled simple, routine tasks, the more was expected of them. As opposed to their technological ancestor, the typewriter, these machines were expected to run the office, not just help out.
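
To make the “if–then” idea a little more concrete, here is a minimal sketch in Python of the three-part structure described above: a small rule base, an inference engine that fires the rules whose conditions match the facts, and a thin “interface” function that reports the result. The rules, thresholds and field names are invented for the purpose of illustration and do not come from any particular system.

```python
# A minimal, illustrative decision support sketch: rule base,
# inference engine, and a thin "user interface".

RULES = [
    # (condition over the facts, recommendation)
    (lambda facts: facts["credit_score"] < 500,
     "reject the loan application"),
    (lambda facts: facts["credit_score"] >= 500 and facts["debt_ratio"] > 0.6,
     "request additional collateral"),
    (lambda facts: facts["credit_score"] >= 700 and facts["debt_ratio"] <= 0.4,
     "approve the loan application"),
]

def infer(facts):
    """Inference engine: fire every rule whose condition holds for the given facts."""
    return [action for condition, action in RULES if condition(facts)]

def consult(facts):
    """User interface: report the recommendations, or admit that none apply."""
    recommendations = infer(facts)
    if not recommendations:
        return "no rule applies - refer the case to a human expert"
    return "; ".join(recommendations)

print(consult({"credit_score": 480, "debt_ratio": 0.7}))   # reject the loan application
```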

Among the decision-making systems in use today, there is a group known as “expert systems” that enjoys the significant trust of society. This group is unique in that, according to many specialists, it has come to resemble humans. Expert systems, which are more or less similar to DSSs, differ from them in that they are capable of “sparking their imaginations”, just like humans. What this means is that they are able to function in complex systems about which they do not have complete knowledge. Instead of analyzing an enormous number of parameters, they take small steps, testing various solutions previously acquired in similar situations. In order to achieve the main goal, these systems set intermediate goals based on tests, and complete these checkpoints before running subsequent procedures. But because of this elasticity, expert systems are sometimes incapable of identifying obviously incorrect scenarios ― or at least scenarios that humans would consider incorrect.
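
One way to picture the “intermediate goals” behaviour is as a toy backward-chaining procedure: to establish a goal, the system either finds it among its known facts or recursively tries to satisfy the subgoals of some rule that would conclude it. The Python sketch below uses an invented rule base and stands in for the far larger knowledge bases of real expert systems.

```python
# Toy backward chaining over an invented rule base.
RULES = {
    # goal: alternative lists of subgoals, any one of which establishes the goal
    "ship_order": [["payment_cleared", "item_in_stock"]],
    "payment_cleared": [["card_authorised"], ["invoice_paid"]],
}

FACTS = {"item_in_stock", "card_authorised"}

def prove(goal, facts):
    """Establish a goal: it is either a known fact, or all subgoals of some rule hold."""
    if goal in facts:
        return True
    for subgoals in RULES.get(goal, []):
        if all(prove(subgoal, facts) for subgoal in subgoals):
            facts.add(goal)      # remember the intermediate result for later checks
            return True
    return False

print(prove("ship_order", set(FACTS)))   # True: both subgoals can be established
```

Note that if the rule base contains an odd rule, the same mechanism will happily prove an odd goal, which is precisely the elasticity problem mentioned above.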

Despite these shortcomings, expert systems have been flooding the service market for the past two decades. Their quiet, barely noticeable presence can be detected in health care, online commerce, and banking. Over the past five years (more or less since Dr. Arkin began building his “mechanical soldier”) some have begun proposing expert system applications in the field of business consulting and even ethics and psychology. As early as the late 90s, some scientists predicted that expert systems capable of learning could not only guide executives in making business decisions, but even someday serve an important educational role by training future generations of employees. But as opponents of DSSs are quick to point out, employees and resources aren’t quite the same thing. How could a machine possibly assist in making the decision to pollute a pond with toxic waste, or to fire an employee with a bad performance record? Such proposals have provoked understandable opposition. Omar E. M. Khalil noted in his 1993 text Artificial Decision-Making and Artificial Ethics: A Management Concern that expert systems cannot make independent decisions involving ethics, or even assist humans in making such decisions, because such systems lack emotions and values, which makes them completely inadequate in assessing a situation. Daniel Dennett, a one-time friend and advocate of cyborgs, also spoke out against expert systems, arguing that they cannot manage like humans because they do not live like humans. The warnings of these opponents appear to be falling on deaf ears. It remains a fact that many people already working with such systems and making decisions with their help prefer the collective responsibility these systems offer to putting their own name on the line. They are convinced that “the system’s decision” is more rational than any they could make themselves.

It’s interesting that a sector as young as the IT industry has managed to develop, almost independently and in such a short time, a new, ethical professional role (along the lines of a doctor, manager, soldier, and perhaps someday even a lawyer), the competencies of which are defined by how closely one works with the system. Unlike the poignant story in Blade Runner, where a replicant longs to become a human, this one features a “replicant” teaching a human how to be human, subjecting him to the Voight-Kampff test à rebours. But perhaps the irony is unfounded. After all, self-improvement techniques are growing increasingly popular, and the role of the therapist is being replaced by that of the coach. On the other hand, a “sensitive programmer” from Stanford University has been teaching his pet robot to pick up on mechanisms of repression identified by psychoanalysts, which can be detected and visualized through MRI technology. In situations such as these, it is in fact difficult to decide who should be teaching whom.

translated by Arthur Barys

ROBOT, THE STUPID

The article was first published on Biweekly.com: http://www.biweekly.pl/article/4553-robot-the-stupid.html

Aiko and Kenji

People who are interested in cyborgs, robots, androids and all the freaky, futurist, technology-related ideas usually navigate the web in search of updates on progress in the field. And there they meet Aiko. Aiko is a female android created by the Canadian-Vietnamese designer Le Trung: she is clothed in a silicone body, weighs 30 kg and measures 152 cm. Aiko speaks fluent English and Japanese and is very skillful at cleaning, washing windows and vacuuming. She reads books and distinguishes colours, and knows how to learn and remember new things. What is more, when Le Trung tried to kiss her in public, she immediately hit him in the face. Undeterred by her violent behaviour, Le Trung talked about her as his wife. Sometimes, however, he referred to her as his child, or as a project that he would leave behind for posterity.

Le Trung assures us that in a few years Aiko will become very much like a real woman. When this happens, he will gain a life companion and leave a treasure for the next generations: a fully humanlike female android. It all sounds very promising, but unfortunately there is one “but”. Aiko cannot walk. She moves in a wheelchair. She is incapable of doing one very simple thing that every mature and healthy human being can easily do. Why is that? Well, first of all, Le Trung could not afford to finance software for walking. The main reason, however, is that the current design of Aiko is not compatible with any good software that enables walking. If Le Trung wanted Aiko to walk, he would have to replace her with a new, better model. But then he would lose what he has achieved so far. Tough choice.

Looking for further examples, one comes across Kenji. Kenji belongs to a group of robots equipped with customised software enabling emotional responses to external stimuli. For Kenji’s creators, the biggest success was Kenji’s devotion to a doll that was kept with him in one room for an extended period. When the doll was gone, Kenji immediately started asking where it was and when it would come back. He missed the doll. When it returned, he hugged it all the time. The scientists were overwhelmed with this success: they had their first emotional machine! Thanks to many weeks of iterated behaviour based on complex code, Kenji acquired something that can roughly be described as a feeling of tenderness. His carers enjoyed his progress until Kenji’s sensitivity began to be dangerous. At some point, Kenji started to demonstrate a level of commitment characteristic of a psychopath.

 

It all started one day when a young student appeared at the lab. Her task was to test new procedures and software for Kenji. The girl regularly spent time with him. But on the day her internship ended, Kenji protested in a rather blunt way: he didn’t let her leave the lab, hugging her so hard with his hydraulic arms that she could not get out. Fortunately, the student managed to escape when two staff members came to her rescue and turned the robot off. After that event, a worried Dr. Takahashi – Kenji’s main carer – confessed that the research group’s enthusiasm had been premature. What is even worse, since the incident with the girl, every time Kenji is activated he reacts the same way towards the first person he encounters: he immediately wants to hug the victim and loudly declares his love and affection through the 20-watt speaker he is equipped with. However, Dr. Takahashi does not want to turn Kenji off for good. He believes that the day will come when, thanks to various improvements, a less compulsive Kenji will be able to meet people without frightening them.

The so-called AI winter, or: “The vodka is good, but the meat is rotten”

Researchers in Artificial Intelligence (AI for short) often show little temperance in fostering society’s hopes for a robotic revolution. On the contrary – they usually announce that every new cyborg will change the world. The situation of the handicapped super-robot Aiko and the hyper-emotional Kenji is characteristic of the whole field of AI. There is a long record of failed experiments, investment failures and successes that in the end were not successes at all.

On the other hand, there are very few research areas that have faced so many eruptions of enthusiasm interspersed with waves of criticism, resulting in substantial funding cuts. In the history of Artificial Intelligence there is a well-known term, “AI winter”, meaning a period of reduced funding for research. The term was coined by analogy to the idea of nuclear winter. It first appeared in 1984 as a subject of public debate at the annual meeting of the American Association for Artificial Intelligence. It provided a brief description of the emotional state of the research community centered on AI: a collapse of faith in the future of the field and a pessimism that was increasingly difficult to conceal.

This mood was related to the lack of success in machine translation. During the Cold War, the U.S. government became interested in the automatic translation of Russian documents and scientific reports, and in 1954 it launched a programme of support for building a translation machine. At first, the researchers were very optimistic. Noam Chomsky’s groundbreaking work on generative grammar was harnessed to improve the process. But the researchers were outmatched by the problem of the ambiguity and context-dependence of language.

Deprived of context, the machine committed some funny errors. An anecdotal example is a sentence translated from English into Russian and then back into English: “The spirit indeed is willing, but the flesh is weak” came back as “The vodka is good, but the meat is rotten.” Similarly, “Out of sight, out of mind” became “The blind idiot”.

 

In 1964, the U.S. National Research Council (NRC), concerned with the lack of progress, created the Automatic Language Processing Advisory Committee (ALPAC) to take a closer look at the problem of translation. In 1966 the Committee came to the conclusion that machine translation was not only expensive, but also less accurate and slower than the work of a human. Having familiarised itself with the Committee’s conclusions, and after having spent approximately USD 20 million, the NRC refused further funding. And this was only the beginning of the problems.

Two major AI winters occurred in 1974–1980 and 1987–1993.

In the 1960s the Defense Advanced Research Projects Agency (DARPA) spent millions of dollars on AI. J. C. R. Licklider, who directed the agency’s computing research, believed deeply that one should invest in people, not in specific projects. Public money lavishly endowed the leaders of AI: Marvin Minsky, John McCarthy, Herbert A. Simon and Allen Newell. It was a time when confidence in the development of Artificial Intelligence and its potential for the army was unwavering. Artificial Intelligence ruled everywhere, not only in the realm of the economy, but also ideologically. Cybernetics became a sort of new metaphysical paradigm.

This paradigm claimed that the ideal machine would soon surpass humans intellectually, yet still serve and protect them. However, after nearly a decade of unlimited spending that had not led to any breakthrough, the government became impatient. The Senate passed an amendment that required DARPA to fund specific research projects directly. Researchers were expected to demonstrate that their work would soon be of benefit to the army. That did not happen. A study report commissioned by DARPA was crushing.

DARPA was deeply disappointed with the results of the scientists working on speech understanding within the methodological framework offered by Carnegie Mellon University. DARPA had hoped for a system that could respond to remote voice commands. Although the research team developed a system that could recognise English, it worked properly only when the words were spoken in a specific order. DARPA felt cheated and in 1974 cancelled a three-million-dollar grant. Cuts in government-funded research affected all academic centres in the United States. It was not until many years later that speech recognition tools based on the Carnegie Mellon technology finally celebrated their success. The speech recognition market reached a value of USD 4 billion in 2001.

Lighthill report vs. fifth generation

The situation in the UK was much the same. The decrease in funding for AI was a response to the so-called Lighthill report of 1973. Professor Sir James Lighthill had been asked by Parliament to evaluate the development of AI in the UK. According to Lighthill, AI was unnecessary: other areas of knowledge were able to achieve better results and needed funding more. A major problem was the sheer intractability of the issues AI struggled with. Lighthill noticed that many AI algorithms which were spectacular in theory turned to dust in the face of reality. The machine’s collision with the real world seemed to be an unsolvable problem. The report led to the almost complete dismantling of AI research in England.

 

The revival of interest in Artificial Intelligence started only in 1983, when Alvey was launched: a research programme funded by the British government and worth 350 million pounds. Two years earlier, the Japanese Ministry of International Trade and Industry had allocated USD 850 million to the so-called fifth-generation computer project. The aim was to write programs and build machines that could carry on a conversation, translate easily between foreign languages and interpret photographs and paintings – in other words, achieve an almost human level of rationality. The British Alvey programme was a response to that project. However, by 1991 most of the tasks foreseen for Alvey had not been achieved. A large part of them are still waiting, and it is now 2013. As with other AI projects, the level of expectations was simply too high.

But let’s go back to 1983. DARPA again began to fund research in artificial intelligence. The long-term goal was to establish so-called strong Artificial Intelligence (strong AI), which – according to John Searle, who coined the term – would mean a machine that genuinely thinks the way a human does. By the way, it is worth noting that both Aiko and Kenji are attempts to implement this very concept. In 1985, the U.S. government issued a one-hundred-million-dollar check for 92 projects in sixty institutions – in industry, universities and government labs. However, two leading AI researchers who had survived the first AI winter, Roger Schank and Marvin Minsky, warned the government and business against excessive enthusiasm. They believed that the ambitions of AI had got out of control and that disappointment must inevitably follow. Hans Moravec, a well-known researcher and Artificial Intelligence enthusiast, claimed that the crisis was caused by the unrealistic predictions of his colleagues, who kept repeating the story of a bright future. Just three years later, the billion-dollar AI industry started to decline.

A few projects survived the funding cuts. Among them was DART, a combat management system. It proved very successful and saved billions of dollars during the first Gulf War. It repaid the cost of many other, less successful DARPA investments, soothed grievances, and justified the agency’s pragmatic policies. Such examples, however, were few.

Great expectations and the Lisp machines

The history of Lisp is also significant for AI, though it has a rather perverse conclusion. Lisp is a family of programming languages that proved essential to the development of artificial intelligence. In general terms, Lisp is a high-level language built around symbolic expressions and capable of a great deal of abstraction, which made it well suited to manipulating symbolic data such as natural language. Lisp quickly became a popular programming language for artificial intelligence. It was used, for example, to implement the programming language that formed the basis of SHRDLU, a groundbreaking computer program designed by Terry Winograd that understood commands given in natural language.
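
As a rough illustration of the style of symbolic, list-based processing that made Lisp attractive for such work, here is a minimal sketch, written in Python for consistency with the other examples, in which nested lists stand in for Lisp’s s-expressions and evaluation is a simple recursive walk over that structure. The expression and operators are, of course, invented for the example.

```python
import math

# Operators understood by the toy evaluator.
OPS = {"+": sum, "*": math.prod}

def evaluate(expr):
    """Evaluate a nested-list expression such as ["+", 1, ["*", 2, 3]]."""
    if isinstance(expr, (int, float)):
        return expr
    operator, *operands = expr
    return OPS[operator](evaluate(arg) for arg in operands)

print(evaluate(["+", 1, ["*", 2, 3]]))   # 7, i.e. (+ 1 (* 2 3)) in Lisp notation
```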

 

When research on AI developed in the 1970s, the performance of existing Lisp systems became a problem. Lisp is quite a sophisticated language, and so it proved difficult to run efficiently on the hardware of the day. This led to the so-called Lisp machines – computers optimised for processing Lisp. Today, such a move – creating machines to support a single language – would be unthinkable! At the time, though, it seemed quite obvious. To keep up, the makers of Lisp machines, along with software companies such as Lucid and Franz, offered ever more powerful versions of their products. However, progress in general-purpose computer hardware and compilers soon made Lisp machines obsolete. Workstations from companies such as Sun Microsystems offered a powerful alternative. Apple and IBM, too, began to build PCs that were easier to use. The turning point came in 1987, when these other computers became more powerful than the far more expensive Lisp machines. An entire industry worth five hundred million U.S. dollars was wiped out in a single year. Just as Minsky and Schank had predicted.

However, Lisp decided to rise from the dead. In the 1980s and ’90s Lisp developers put great effort into unifying the many dialects of Lisp into a single language. The so-called Common Lisp was indeed largely compatible with the earlier dialects. Its laboriously built position was nevertheless weakened by the rise of other popular programming languages: C++ and Java. Nonetheless, Lisp has experienced renewed interest since 2000. Paradoxically, it owes its newly gained popularity to an old medium – the book. The textbook Practical Common Lisp by Peter Seibel, published in 2004, became the second most popular programming title on Amazon. Seibel, like several other authors (among others the well-known businessman and programmer Peter Norvig), was intrigued by the idea of popularising a language considered obsolete. For them it was a task akin to that of the pioneers who brought Hebrew back into daily circulation in Israel. It turned out that the outdated language has its advantages – it gives developers a new perspective on programming, which makes them much more effective. As we can see, Artificial Intelligence takes strange paths.

Fear of the next winter

Now, in the early twenty-first century, when AI technology has become common, its successes are often marginalised, mainly because AI has become obvious to us. Nick Bostrom has even said that intelligent tools become so commonplace that one forgets they are intelligent. Rodney Brooks – an innovative, highly talented researcher and programmer – agrees. He points out that despite the widespread view that Artificial Intelligence has failed, it surrounds us at every turn.

Technologies developed by Artificial Intelligence researchers have achieved commercial success in many areas: machine translation (initially condemned to failure), data mining, industrial robotics, logistics, speech recognition and medical diagnostics. Fuzzy logic – a logic in which truth is not restricted to the values 0 and 1 but extends over a range of intermediate values – has been harnessed to control automatic transmissions in cars such as the Audi TT, the VW Touareg and some Skoda models.
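
To give a concrete sense of what those intermediate values mean, here is a minimal sketch in Python: a statement such as “the engine load is high” receives a degree of truth between 0 and 1 rather than a yes-or-no answer, and a fuzzy “and” is taken as the minimum of the degrees. The membership functions and thresholds are invented for illustration; a real transmission controller is far more elaborate.

```python
# A minimal sketch of fuzzy membership and a fuzzy "and".

def triangular(x, left, peak, right):
    """Degree (0..1) to which x belongs to a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def high_load(load_percent):
    return triangular(load_percent, 50, 80, 110)

def low_speed(kmh):
    return triangular(kmh, -10, 0, 40)

def downshift_degree(load_percent, kmh):
    # Fuzzy AND is commonly taken as the minimum of the degrees of truth.
    return min(high_load(load_percent), low_speed(kmh))

print(round(downshift_degree(70, 25), 2))   # 0.38: partially true, neither 0 nor 1
```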

The fear of another winter gradually gave way. Some researchers continued to voice concern that a new AI winter might come with another too-ambitious project or another unrealistic promise made to the public by eminent scientists. There were, for example, fears that the robot Cog would ruin AI’s barely recovering reputation. But it did not happen. Cog was a project carried out in the Humanoid Robotics Group at MIT by the already mentioned Rodney Brooks, together with a multi-disciplinary group of researchers that included the well-known philosopher Daniel Dennett. The Cog project was based on the assumption that human-level intelligence requires experience gathered through contact with people. So Cog was to enter into such interactions and to learn the way infants learn. The aims of the project were, inter alia, (1) to design and create a humanoid face that would help the robot establish and maintain social contact with people, (2) to create a robot that would be able to act on people and objects the way a human does, (3) to build a mechanical proprioception system, and (4) to develop complex systems of sight, hearing, touch and vocalisation. The very list of tasks shows how ambitious the Cog project was! Owing to (surprise, surprise!) a lack of the expected results, Cog’s funding was cut in 2003. But it turned out that this time it was not the end of the partially successful project. Cog had built a collective imagination around the idea of something resembling a human being. The same happened with Deep Blue, which, despite some setbacks, was an excellent chess champion – and that was all that mattered.

Waiting for spring

A peak of inflated expectations clashing with a deep pit of disappointment – that is the best way to describe the emotions surrounding AI. Looking at the mood swings around artificial intelligence, I have gradually changed my idea of the goal of creating an intelligent machine. For a long time I thought that the humanoid robot had to be a mirror of man. But at some point I started asking myself: why do we want the robot to be more perfect than a man? Why don’t we let it fail? Why, when it fails anyway, do we stubbornly try again? After all, it is not the case that in a few years technology will solve all the riddles of humanity and create an artificial human being. It will not happen soon. So is there some other factor that pushes man to pursue his activities in this absurd field? After all, if you think about it, even the name, Artificial Intelligence, is quite grotesque. And I think to myself: the machine is not exactly a mirror, it is rather something that has to remain different, and yet still needs to surpass us. Thanks to the otherness of the machine, human beings feel safe. We don’t like resemblance; it’s disturbing. This is confirmed by various pop-culture visions of androids and cyborgs taking control of the world. Because the machine becomes better than us at things we don’t feel like doing, while remaining quite different from us, we behave towards it like somewhat pathological parents. We challenge it, demand results, get offended by its failures and then return with a new dose of energy and expectations.

So, to make a long story short, it doesn’t matter that Aiko and Kenji get lost in the human world. They are The Others, they are weird, but they make a beautiful couple, each doing what they do best: he hugs and she punches.