Jarvis Stark. Marvel Cinematic Universe

Most users know Siri as the most popular personal assistant and question-and-answer technology on iOS devices. Fortunately, Siri is not the only system on the market: fans of science fiction and of Marvel comics are offered the personal assistant JARVIS, modeled on the one from the movie "Iron Man".

Anyone who has seen the film "Iron Man" probably remembers Tony Stark's butler, Jarvis. With this app, the owner of the device gets a virtual servant of his own on a portable device. The JARVIS program is a distinctive development that uses the voice and image of the Jarvis character.

The JARVIS utility starts with the usual audio instructions for using and managing the tool. Once setup is complete, the user needs to indicate their gender (so that the virtual assistant can address the owner of the device correctly) and set the unit of measurement for temperature (Kelvin, Fahrenheit, or Celsius).


A detailed list of commands can be opened by tapping the icon in the upper corner of the display. All commands must begin with the address "Jarvis" and usually contain one more word (for example, "Jarvis, weather forecast"). JARVIS can also notify the device owner about upcoming meetings and display the current time, and a variety of audio reminders can be created in the program.

Notably, for owners of optical discs with the blockbuster "Iron Man", the JARVIS utility provides additional features: for example, the user can control playback of the movie through the virtual butler.



Occupation: Butler

Enemies: All the Avengers' enemies

Fictional biography

War hero Jarvis served as a pilot in the British Air Force. Having moved to the United States, he became a butler in the house of Howard and Maria Stark, and after their death he continued to work for their son Tony.

Outside of comics

Television

Movie

  • As Stark's butler, Edwin Jarvis appears briefly in the animated film Ultimate Avengers.
  • Jarvis appeared in a more significant role in Ultimate Avengers 2, where he was voiced by Fred Tatasciore.

Marvel Cinematic Universe

In the 2008 feature film Iron Man, JARVIS appears as an artificial-intelligence butler in Tony Stark's mansion and is also loaded into his armor for cyberpathic communication. He is able to joke and to comment sarcastically on the recklessness of his creator, yet he remains concerned about Stark's well-being. Paul Bettany, who voiced JARVIS, admits that he had little idea what his character was about and only agreed to do the voice work as a favor to his friend Jon Favreau, the film's director. Bettany reprised the role in Iron Man 2, The Avengers, and Iron Man 3.

On January 6, 2015, the series "Agent Carter" premiered, telling the story of Peggy Carter, Captain America's love interest. The series is connected to the wider Marvel Cinematic Universe (Hayley Atwell and Dominic Cooper reprise their roles as Peggy Carter and Howard Stark, respectively, from the feature film Captain America: The First Avenger). Peggy's main ally is Howard Stark's butler Edwin Jarvis, who proves to be a reliable partner thanks to his loyalty to Stark. Jarvis is played by English actor James D'Arcy.

Computer games

  • Edwin Jarvis appears in Marvel: Ultimate Alliance, voiced by Phillip Proctor. He appears in Stark Tower and also has dialogue with Deadpool, Iron Man, Spider-Woman and Captain America.
  • JARVIS appears in the Iron Man video game based on the film, voiced by Gillon Stevenson. He acts as a source of information for the player, informing him of any messages he needs to be aware of.
  • In the sequel game Iron Man 2, JARVIS is voiced by Andrew Chaykin.


Facts about the film "Iron Man" (2008)

  • The Mark 3 suit was designed by Phil Sanders and Iron Man comic-book artist Adi Granov. The physical suit was then built by Stan Winston Studios.
  • After filming wrapped on June 26, 2007, Robert Downey Jr. spent eight months working with the special-effects team to accurately capture all of Iron Man's movements.
  • Pepper Potts uses an LG KS20 phone, Tony Stark an LG VX9400, and James Rhodes an LG KG810.
  • The final fight is reminiscent of the fight between RoboCop and the cyborg drug lord Cain in RoboCop 2 (1990).
  • This is the first film to be fully financed by Marvel Studios.
  • Jon Favreau decided to shoot the film in California because he felt that too many superhero films were being made on the East Coast, particularly in New York.
  • Nicolas Cage and Tom Cruise were interested in playing the role of Iron Man. In particular, Tom Cruise planned to produce the film and star in the title role.
  • Following established tradition, comic-book author Stan Lee appears in a cameo: he plays a man whom the main character mistakes for Playboy founder Hugh Hefner.
  • The theme song from the 1966 cartoon "The Invincible Iron Man" is heard repeatedly in the film (in the casino, in the bed scene, and as the ringtone on Rhodes' cell phone).
  • Tony Stark is based on inventor Howard Hughes.
  • While filming the tank scene, the filmmakers broke the camera.
  • Paul Bettany voiced all of Jarvis's lines in 2 hours.
  • Initially, the main villain was supposed to be the Mandarin, but Jon Favreau considered him too fantastical and outdated. If the Mandarin appeared in the film, it would be as an Indonesian terrorist.
  • Clive Owen and Sam Rockwell were considered for the role of Tony Stark.
  • A video game of the same name was made based on the film.
  • Gwyneth Paltrow agreed to the role of Virginia "Pepper" Potts on the condition that her scenes would be filmed close to home. As a result, Paltrow could get to the set in 15 minutes.
  • While preparing for the role of Tony Stark, Robert Downey Jr. focused on the image of the American billionaire, inventor and philanthropist Elon Musk, who, among other things, is the owner or founder of companies such as PayPal, SpaceX and Tesla Motors.
  • Rachel McAdams was Jon Favreau's first choice to play Pepper Potts, but she turned down the offer.
  • The film's script was not completely finished when filming began, as the filmmakers were more focused on the story and action, so the dialogue was often improvised. Jon Favreau admitted that this made the film feel more natural. Some scenes were shot with two cameras to capture dialogue improvised on set. Robert Downey Jr. often asked to reshoot the same scene several times because he wanted to try something new. This made things harder for Gwyneth Paltrow, who had to match her actions and words to his performance without going out of character, since she never knew what he might say.
  • Tony Stark's computer system is called J.A.R.V.I.S. as a tribute to Edwin Jarvis, Howard Stark's butler. The character was changed to an artificial intelligence to avoid comparisons with Bruce Wayne's butler, Alfred Pennyworth.
  • It took about seventeen years for the film to finally get developed. Universal Pictures was originally set to produce the film in April 1990. They later sold the rights to 20th Century Fox. Fox later sold the rights to New Line Cinema. Finally, Marvel Studios decided to take their own creation and develop it themselves.
  • Attention! The following list of facts about the film contains spoilers. Be careful.
  • After the credits there is a short bonus scene. It features Samuel L. Jackson as Nick Fury.
  • In the scene where Pepper discovers Tony in the workshop removing his armor, you can see Captain America's shield, which Tony Stark used to prop up his installation in the second part of Iron Man.
  • In the first minutes of the film, when Tony Stark falls into the hands of terrorists and a ransom is demanded for him, in the background you can see a stretched canvas with symbols that were used in the film Iron Man 3 (2013) before the start of the Mandarin's broadcast.
  • Jon Favreau wanted Robert Downey Jr. to play Stark because he felt that the actor's background was exactly the kind of experience needed for such a role. He commented: "Robert's best and worst moments all happened in the public eye. He had to find inner balance to overcome obstacles that extended far beyond his career. This is Tony Stark. Robert brings a depth to the character that goes beyond an ordinary comic-book hero with high-school problems or girl trouble." Favreau also felt that Robert could make Stark a "lovable jerk" while still portraying a genuine emotional journey once his character captured the audience's attention.

Mark Zuckerberg created an artificial intelligence called Jarvis, like the one from Iron Man. It runs the Facebook CEO's home, plays music for him, and shoots plain gray T-shirts out of a special cannon. We have answered the main questions about Zuckerberg's artificial intelligence and translated his original post about the Jarvis development process.

Zuckerberg set a goal a year ago to create artificial intelligence

At the beginning of each year, Mark Zuckerberg sets goals for the coming 12 months. In 2010, that goal was to learn Mandarin Chinese, and in 2015, to read two books a month.

This year, Zuckerberg promised himself to create an artificial intelligence like the one from Iron Man. As planned, it would control the lighting, cameras, and music in his house.

This Monday, December 19, the founder of Facebook announced the completion of the project and shared a post in which he described the process of creating Jarvis (an artificial intelligence named after Iron Man's assistant).

What can Jarvis do?

Pretty much everything you'd expect from an artificial intelligence connected to a "smart home": it turns lights and music on and off, makes toast, and opens doors (thanks to facial recognition technology). Jarvis can also shoot Zuckerberg's signature gray T-shirts at him from a specially modified cannon.

Jarvis's functions also include less practical abilities. For example, Zuckerberg taught him a simple game: he or his wife Priscilla ask the artificial intelligence “who should be tickled,” and Jarvis randomly answers “Max” or “Beast” (the names of their daughter and dog, respectively).

How did Zuckerberg create Jarvis?

In his post, Zuckerberg divided the process of creating Jarvis into five large blocks: the connected home, natural language, face and object recognition, a Facebook Messenger bot, and speech recognition.


First, to function, Jarvis must have access to a connected system of devices throughout the house (lights, cameras, appliances).

Secondly, artificial intelligence must understand natural language, that is, queries like “play something from Kanye West.”

Third, Jarvis needs to recognize people's faces in order to notify Zuckerberg about guests or determine the location of family members in the house.

Fourth, Zuckerberg wanted to be able to talk to Jarvis not only from one device, but from any phone. To do this, he decided to create a chatbot on Facebook Messenger.

Finally, Jarvis also had to be able to recognize spoken speech and to answer with a voice of its own.
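
To make the five blocks more concrete, here is a minimal sketch of how such an assistant could be wired together, with each block reduced to a stub behind one dispatcher. This is not Zuckerberg's code (which was never released); every class and function name below is hypothetical.

    # A minimal sketch of the building blocks behind a Jarvis-style assistant.
    # Every name below (HomeController, FaceRecognizer, ...) is hypothetical.

    class HomeController:
        """Block 1: talks to the connected devices (lights, cameras, appliances)."""
        def set_lights(self, room: str, on: bool) -> None:
            print(f"lights in {room} -> {'on' if on else 'off'}")

    class NaturalLanguageParser:
        """Block 2: turns free-form text into a structured intent."""
        def parse(self, text: str) -> dict:
            text = text.lower()
            if "light" in text:
                room = "bedroom" if "bedroom" in text else "living room"
                return {"intent": "lights", "room": room, "on": "off" not in text}
            return {"intent": "unknown", "text": text}

    class FaceRecognizer:
        """Block 3: identifies who is at the door (stubbed out here)."""
        def identify(self, image_bytes: bytes) -> str:
            return "unknown visitor"

    class Assistant:
        """Ties the blocks together. Blocks 4 and 5 (Messenger bot, speech) would
        feed text into handle_text() from a chat webhook or a speech-to-text engine."""
        def __init__(self):
            self.home = HomeController()
            self.nlp = NaturalLanguageParser()
            self.faces = FaceRecognizer()

        def handle_text(self, text: str) -> str:
            intent = self.nlp.parse(text)
            if intent["intent"] == "lights":
                self.home.set_lights(intent["room"], intent["on"])
                return f"Done: lights {'on' if intent['on'] else 'off'} in the {intent['room']}."
            return "Sorry, I did not understand that."

    if __name__ == "__main__":
        print(Assistant().handle_text("Jarvis, turn off the lights in the bedroom"))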

“Artificial intelligence is both closer and further than we think”

As the head of Facebook noted, his main goal in creating Jarvis was to learn more about the state of artificial intelligence in the modern world. According to him, AI can do impressive things - control cars, cure diseases, and discover planets.

However, the problem with modern artificial intelligence lies in the people themselves. We don’t yet know what intelligence is, and until we answer this question, we won’t be able to create real AI.

Who voices Jarvis? (updated)

Zuckerberg shared a video showing aspects of Jarvis's work. From the video it also becomes clear that the artificial intelligence is voiced by actor Morgan Freeman.

This October, Zuckerberg asked on his Facebook page who he should invite to voice Jarvis. People began recommending Morgan Freeman, renowned scientist Neil deGrasse Tyson and, yes, Iron Man himself, Robert Downey Jr.

The actor responded to this comment and seemed to agree to the offer - on the condition that Paul Bettany (who voices Jarvis in the Iron Man films) receives the fee.

However, in the end Freeman took up the work.

Translation of Zuckerberg's post in which he explains the Jarvis development process

My personal challenge for 2016 was to create a simple artificial intelligence that would run my house - just like Jarvis in Iron Man.

My goal was to learn about the state of artificial intelligence - and it turns out we've come much further than many people realize (we're still a long way from the finish line, though). Challenges like this always result in me learning more than I expected, and this project was no exception: it helped me understand the internal engineering systems we use at Facebook, and it also gave me a general idea of "smart homes".

Over this year I have built a simple AI that I can talk to from my phone and computer: it controls my home - lighting, temperature, music, security; it recognizes my habits and tastes; it learns new words and concepts; and it even entertains Max [Zuckerberg's daughter - ed.]. It uses several artificial intelligence techniques, including natural language processing, speech and face recognition, and machine learning, all written in Python, PHP, and Objective-C. In this post, I'll explain what I built and what I learned along the way.

Video of Zuckerberg demonstrating Jarvis's work

Let's get started: Connecting the house

In some ways, this challenge was easier than I expected. In fact, my running goal (running 365 miles in 2016) took even longer. But one aspect that gave me a lot of difficulty was bringing together all the various systems in my house.

Before I built the AI, I needed to write code that would connect all these systems, which are written in different programming languages. We [the Zuckerberg family] use Crestron for the lights, thermostat, and doors, Sonos with Spotify for music, Samsung for the TV, Nest for cameras and, of course, Facebook for my work. In most cases, I had to reverse engineer the APIs for these systems to get them to respond to my commands to turn on the lights or turn on the music.
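
The post gives no code, but "reverse engineering the API" typically means replaying the HTTP requests a vendor's own app sends to its controller. A hedged sketch of such a thin wrapper, where the host, path, and JSON payload are invented purely for illustration:

    # Hypothetical wrapper around a reverse-engineered local API for a light controller.
    # The host, path, and JSON payload below are invented for illustration only.
    import requests

    class LightBridge:
        def __init__(self, host: str = "192.168.1.50"):
            self.base_url = f"http://{host}/api"

        def set_light(self, room: str, on: bool) -> bool:
            # Replays the same request the vendor's mobile app was observed sending.
            resp = requests.post(
                f"{self.base_url}/lights",
                json={"room": room, "state": "on" if on else "off"},
                timeout=5,
            )
            return resp.status_code == 200

    if __name__ == "__main__":
        bridge = LightBridge()
        print("ok" if bridge.set_light("bedroom", False) else "request failed")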

Then there was the problem that many of these devices are not connected to the Internet. Some of them can be turned on and off over the Internet, but that is not enough. For example, I had a lot of difficulty finding a toaster that, with the power off, would let the bread drop in and automatically start toasting when the power came back on. I ended up buying an old toaster from the 1950s and connecting it to a connected switch. I modified the food dispenser for Beast [Zuckerberg's dog] and the gray T-shirt cannon in the same way.

In order for assistants like Jarvis to control everything in our homes, we need more connected devices, and the industry needs to develop common APIs and standards for devices to talk to each other.
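
One way to read the call for common APIs is a single interface that every vendor-specific adapter implements, so the assistant never cares what brand sits behind a switch. A minimal sketch using Python's typing.Protocol; the interface and both adapters are hypothetical:

    # A hypothetical common device interface; vendor-specific adapters implement it.
    from typing import Protocol

    class Switchable(Protocol):
        name: str
        def turn_on(self) -> None: ...
        def turn_off(self) -> None: ...

    class CrestronLight:
        def __init__(self, room: str):
            self.name = f"{room} light"
        def turn_on(self) -> None:
            print(f"[Crestron] {self.name} on")    # a real adapter would call the controller here
        def turn_off(self) -> None:
            print(f"[Crestron] {self.name} off")

    class SmartPlugToaster:
        name = "toaster plug"
        def turn_on(self) -> None:
            print("[plug] toaster powered, bread starts toasting")
        def turn_off(self) -> None:
            print("[plug] toaster powered off")

    def all_off(devices: list) -> None:
        # Works for any object that satisfies the Switchable protocol.
        for device in devices:
            device.turn_off()

    if __name__ == "__main__":
        all_off([CrestronLight("bedroom"), SmartPlugToaster()])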

Natural language

Once I wrote the code that would allow my computer to control my entire house, the next step was communication: I wanted to talk to the computer and the house the same way I talk to anyone else. It was a two-step process: first I taught it to understand text messages, and then I added voice response and speech-to-text capabilities.

I started with simple keywords like "bedroom", "lights", "on": the computer looked for these words in a sentence and, if necessary, turned on the lights in the bedroom. It soon became clear that he also had to learn synonyms - like how living room and family room mean the same thing in our house. This meant I had to teach him to learn new words and concepts.
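
A hedged sketch of that first keyword stage, including a synonym table that can be extended at runtime when a new word is taught; none of this is Zuckerberg's actual code, and the names and vocabulary are illustrative:

    # Keyword matching with a learnable synonym table (illustrative only).
    SYNONYMS = {
        "family room": "living room",   # taught: both phrases mean the same room
        "lamp": "lights",
    }
    ROOMS = {"bedroom", "living room", "kitchen"}
    ACTIONS = {"on", "off"}

    def learn_synonym(new_word, known_word):
        SYNONYMS[new_word] = known_word

    def parse_command(text):
        text = text.lower()
        for word, canonical in SYNONYMS.items():
            text = text.replace(word, canonical)
        room = next((r for r in ROOMS if r in text), None)
        action = next((a for a in ACTIONS if f" {a}" in f" {text}"), None)
        if room and "lights" in text and action:
            return {"device": "lights", "room": room, "state": action}
        return None

    print(parse_command("turn on the lamp in the family room"))
    # -> {'device': 'lights', 'room': 'living room', 'state': 'on'}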

Understanding context is important for any AI. For example, when I tell my AI [Jarvis] to turn on the air conditioning in "my office," that means something completely different than when Priscilla [Zuckerberg's wife] asks it to do the same thing. Plenty of problems came up because of this. Or, for example, if you ask it to dim the lights or play a song without specifying a room, it needs to know where you are - otherwise the music will start playing in Max's room at the exact moment she is sleeping. Oops.

Music is a more interesting and complex domain for natural language, because there are so many artists, songs, and albums that a simple keyword search does not work. Lights can only be turned on or off, but when you say "play X," even the smallest variations can mean completely different things. Take, for example, several queries related to Adele: "play someone like you", "play someone like adele", "play some adele". They sound similar, but each falls into a different category of request. The first asks to play a specific song, the second asks for artists similar to Adele, and the third creates a playlist of Adele's best songs. Through a system of positive and negative feedback, I taught my AI to tell these differences apart.
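
Zuckerberg does not say how his classifier works internally; one illustrative way to reproduce the song/similar-artist/playlist distinction with positive and negative feedback is a tiny weighted keyword scorer whose weights get nudged after each correction. The categories, weights, and feedback rule below are a toy example, not his implementation:

    # Toy intent scorer for music requests, adjusted by positive/negative feedback.
    # Categories: "song" (play this exact track), "similar" (recommend artists like X),
    # "artist" (build a playlist of X's songs). Purely illustrative.
    from collections import defaultdict

    WEIGHTS = {
        "song": defaultdict(float, {"play": 0.1}),
        "similar": defaultdict(float, {"like": 1.0, "someone": 0.5}),
        "artist": defaultdict(float, {"some": 1.0}),
    }

    def classify(query):
        tokens = query.lower().split()
        scores = {intent: sum(w[t] for t in tokens) for intent, w in WEIGHTS.items()}
        return max(scores, key=scores.get)

    def feedback(query, intent, positive):
        """Nudge this intent's weights up (thumbs up) or down (thumbs down)."""
        delta = 0.2 if positive else -0.2
        for token in query.lower().split():
            WEIGHTS[intent][token] += delta

    print(classify("play someone like you"))   # initially classified as "similar"
    feedback("play someone like you", "similar", positive=False)
    feedback("play someone like you", "song", positive=True)
    print(classify("play someone like you"))   # after feedback, shifts to "song"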

The more context the AI is given, the better it can handle open-ended queries. Now, if I ask Jarvis to "play music," it looks through the songs I've listened to and, more often than not, picks exactly what I would like to hear. If it gets the mood wrong, I can just tell it something like "that's not light music, play something light," and it will immediately re-categorize the song and correct the request. It also distinguishes between me and Priscilla and gives us individual recommendations. In general, I realized that we use open-ended queries much more often than specific ones.

Object and face recognition

Roughly one-third of the brain is dedicated to vision, and AI still has a lot of trouble understanding what's going on in a photo or video. These challenges include tracking (e.g., is Max awake and crawling around in her crib?), object recognition (is that Beast or a rug in that room?), and face recognition (who is standing at the door?).

Facial recognition is a particularly difficult version of object recognition because most people look relatively similar (it's easier for a computer to tell two random objects apart, like a sandwich and a house). But Facebook is very good at recognizing faces to tag friends in your photos. The same technology is suitable for allowing AI to determine which of your friends is at your door.

To do this, I simply installed several cameras at my door that capture the image from different angles. Today's AIs can't yet identify people from the top of their heads, so having multiple angles ensures that the computer gets an image of a face. I built a simple server that constantly monitors the cameras and performs a two-step process: first, it runs face detection (which lets it determine that a person has approached the door); second, if it finds a face, it runs facial recognition (which lets it determine exactly who has come to the door). Once it has identified the guest, the computer checks a specific list - if I was expecting this person today, it lets the guest in and notifies me of their arrival.
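
A hedged sketch of that same two-step pipeline using the open-source face_recognition package (detection via face_locations, identification via face_encodings and compare_faces). The image paths, guest list, and the door-unlock step are stand-ins for the real camera and lock integrations:

    # Two-step pipeline: (1) detect a face, (2) recognize who it is, then check the guest list.
    # Uses the open-source face_recognition package; camera/door calls are stubs.
    import face_recognition

    # Precompute encodings for people we might expect at the door (paths are illustrative).
    known = {
        "priscilla": face_recognition.face_encodings(
            face_recognition.load_image_file("known/priscilla.jpg"))[0],
    }
    expected_today = {"priscilla"}

    def handle_frame(frame_path):
        image = face_recognition.load_image_file(frame_path)
        # Step 1: face detection - is anyone at the door at all?
        locations = face_recognition.face_locations(image)
        if not locations:
            return
        # Step 2: face recognition - who is it?
        encoding = face_recognition.face_encodings(image, known_face_locations=locations)[0]
        for name, known_encoding in known.items():
            if face_recognition.compare_faces([known_encoding], encoding)[0]:
                if name in expected_today:
                    print(f"{name} is at the door - unlocking and sending a notification")  # stubs
                return
        print("Unrecognized visitor at the door")

    handle_frame("door_camera/latest.jpg")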

This kind of visual system in an AI is well suited to a number of things: for example, it knows when Max wakes up and starts playing her music or a Mandarin lesson, or it solves the context problem by knowing which room we are in and responding correctly to open requests like "turn on the light." Like most aspects of this AI, vision is most useful when it informs a broader model of the world and is integrated with other abilities - for example, knowing your friends and opening the door for them when they arrive. The more context a system has, the smarter it becomes.

Chatbot in Messenger

I programmed Jarvis on my computer, but in order for it to be truly useful, I needed to be able to access it from anywhere. This meant that I needed to use my phone to communicate, rather than the device installed in my home.

I started by creating a Messenger chatbot to communicate with Jarvis because it's much easier than creating a separate app. Messenger has a very simple bot framework that automatically does a lot of things for you - including running on both iOS and Android, supporting text, images, and audio, delivering notifications, and more. You can learn more about the bot framework at messenger.com/platform.

I can write anything to the Jarvis bot, and it will automatically pass it to the Jarvis server and process the request. I can also send audio recordings, and the server will convert them to text and fulfill the request. During the day, if I'm not at home, Jarvis can text me about who is there or about things I need to do.
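
The Messenger platform delivers incoming messages to a webhook you host, and replies go back through its Send API (see messenger.com/platform). A hedged Flask sketch of that relay; the page access token and the handle_text() stub are placeholders for the real Jarvis pieces:

    # Minimal Messenger webhook that relays text to a Jarvis handler and replies.
    # PAGE_ACCESS_TOKEN and handle_text() are placeholders for the real pieces.
    import os
    import requests
    from flask import Flask, request

    app = Flask(__name__)
    PAGE_ACCESS_TOKEN = os.environ.get("PAGE_ACCESS_TOKEN", "")
    SEND_API = "https://graph.facebook.com/v2.6/me/messages"

    def handle_text(text):
        # Placeholder for the actual Jarvis logic (lights, music, door, ...).
        return f"Jarvis received: {text}"

    @app.route("/webhook", methods=["POST"])
    def webhook():
        payload = request.get_json() or {}
        for entry in payload.get("entry", []):
            for event in entry.get("messaging", []):
                if "message" in event and "text" in event["message"]:
                    reply = handle_text(event["message"]["text"])
                    requests.post(
                        SEND_API,
                        params={"access_token": PAGE_ACCESS_TOKEN},
                        json={"recipient": {"id": event["sender"]["id"]},
                              "message": {"text": reply}},
                        timeout=5,
                    )
        return "ok", 200

    if __name__ == "__main__":
        app.run(port=5000)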

One of the surprises I discovered when creating Jarvis is that when I have the choice between speech and text to communicate with him, I write to him much more often than I expected. There are many reasons for this, but the main one is that it doesn't bother the people around me. If I'm asking for something related to them, like playing music for all of us, then I use a voice request, but in most cases I'm more comfortable texting Jarvis. Likewise, when Jarvis communicates with me, I prefer text over voice. This is because voice can be disruptive, while text gives you more control over when you want to look at it. Even when I talk to Jarvis by voice on the phone, I prefer that it show its answer as text.

This preference for text communication over voice is a pattern we also see in Messenger or WhatsApp, where the volume of text messages is growing much faster than the volume of voice messages. This means that future AI products cannot rely only on voice [as, for example, Amazon Echo does] and they should have an interface for personal correspondence. I've always been optimistic about AI bots, but my experience with Jarvis has made me even more confident that we'll be interacting with bots like Jarvis in the future.

Despite my opinion that text will be more important for communicating with future AIs, I still believe that voice will play an equally important role. The biggest advantage of voice is speed: you don't have to take out your phone, open the app, and start typing - all you have to do is talk.

To enable voice functionality for Jarvis, I needed to build a special application that would constantly listen to what I was saying. The Messenger chatbot is great for many things, but it's not great for constantly monitoring my speech. My own Jarvis app allows me to put my phone on the table and it will listen to me. I can also put multiple phones with the Jarvis app around the house so I can use it from any room.
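
The real app was written in Objective-C for iOS, but as a rough stand-in for an always-listening loop, here is a hedged desktop sketch using the SpeechRecognition package: it keeps listening on a microphone, converts speech to text, and hands anything addressed to "Jarvis" to the same text handler the chatbot would use. The handler is a placeholder.

    # Always-listening loop: microphone -> speech-to-text -> Jarvis text handler.
    # Uses the SpeechRecognition package; handle_text() is a placeholder.
    import speech_recognition as sr

    def handle_text(text):
        return f"Jarvis heard: {text}"   # placeholder for the real command logic

    recognizer = sr.Recognizer()

    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        while True:
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                text = recognizer.recognize_google(audio)   # any speech-to-text backend works here
            except sr.UnknownValueError:
                continue   # nothing intelligible was said
            if text.lower().startswith("jarvis"):
                print(handle_text(text))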

The idea is similar to the vision Amazon is pursuing with its Echo voice assistant, but in my experience I found myself wanting to reach Jarvis when I'm out and about as well. That is why having the phone as the main interface, instead of a dedicated home device, is critically important.

I developed the first version of the Jarvis app on iOS, and I plan to make an Android version soon. I hadn't made an iOS app since 2012, and one of my main observations is that the tools we've built at Facebook for developing apps like this are very impressive.

Speech recognition technology has improved significantly lately, but no artificial intelligence can yet understand unstructured conversational speech on the fly. Speech recognition relies on listening to what you say and predicting what you'll say next, which is why structured speech is much easier to understand than unstructured conversation.

Another interesting limitation in speech recognition systems - and machine learning in general - is that they are optimized for specific problems. For example, understanding a conversation between a person and a computer is not quite the same as understanding a conversation between a person and another person. If you train a machine by feeding it data from Google searches where people are talking to the search bar, then that machine will perform worse on a Facebook site where people are talking to each other.

In the case of Jarvis, it's designed for close-range speech recognition, unlike the Echo, which you can talk to from across the room. These systems are more specialized than we think, which means we are far from generalized [AI] systems.

On a psychological level, when you talk to a machine, you automatically assign more emotional depth to the conversation than when you communicate with it through text or a graphical interface. One interesting thing I found when adding voice to Jarvis was that I wanted to give him more humor - partly so he could interact with Max and entertain her, and partly so he would fit into [our family] better.

I taught him small fun games, like the one where Priscilla or I ask him who we should tickle next and he randomly answers "Max" or "Beast." Just for fun, I also threw in some classic lines like "I'm sorry, Priscilla. I'm afraid I can't do that" [a reference to the artificial intelligence HAL 9000 from Stanley Kubrick's film "2001: A Space Odyssey"].
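
In code, the tickle game is little more than a random choice; a hedged sketch of how such Easter eggs might be wired in, with illustrative trigger phrases:

    # Tiny Easter-egg handlers: the tickle game and a HAL 9000 nod (triggers are illustrative).
    import random

    def easter_egg(text):
        text = text.lower()
        if "who should we tickle" in text:
            return random.choice(["Max", "Beast"])
        if "open the pod bay doors" in text:
            return "I'm sorry, Priscilla. I'm afraid I can't do that."
        return None   # not an Easter egg; fall through to normal command handling

    print(easter_egg("Jarvis, who should we tickle next?"))   # prints "Max" or "Beast"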

There are many more things that can be explored in terms of voice. AI technology is already good enough to make a great product, and it will only get better in the coming years. At the same time, I think the best products are those that you can take with you and use privately anywhere.

Facebook development environment [or a bit of advertising from Zuckerberg - ed.]

As CEO of Facebook, I no longer write code for our internal environment. However, I never stopped coding, although I now do it for personal projects like Jarvis. I expected to learn a lot about the state of the art in artificial intelligence today, but I had no idea that I would also learn about what it's like to be a Facebook engineer. In short, it's impressive.

My personal experience with the Facebook code base is likely similar to that of our new engineers. I'm constantly amazed at how well organized the code is and how easy it is to find what you need - whether it's related to facial and speech recognition, a chatbot framework, or iOS app development.

The open-source Nuclide packages we built on top of GitHub's Atom make development much easier. The Buck build system we created for working on big projects also saved me a lot of time. Our open-source text classifier fastText is also worth a look if you are interested in AI development - and in general, dig into the Facebook Research GitHub repository.
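
fastText, mentioned above, trains a supervised text classifier from a file of __label__-prefixed examples. A hedged sketch of using it for the kind of request categorization discussed earlier; the training file name, labels, and example lines are made up for illustration:

    # Training a fastText classifier on labeled home-assistant requests (illustrative data).
    import fasttext

    # queries.txt contains lines like:
    #   __label__lights turn off the bedroom lights
    #   __label__music play some adele
    #   __label__door who is at the front door
    model = fasttext.train_supervised(input="queries.txt", epoch=25, lr=1.0)

    labels, probabilities = model.predict("play something light")
    print(labels[0], round(float(probabilities[0]), 3))   # e.g. __label__music 0.97

    model.save_model("jarvis_intents.bin")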

One of our values is to move fast. This means you should be able to come here [to Facebook] and build an application faster than anywhere else, and be able to use our AI infrastructure and tools to develop things that would take much more time if you were working alone. Creating internal tools that make engineering more efficient is important for any technology company, and we take this very seriously. So I encourage you to use our tools too - it won't hurt anyone.

Next steps

Even though this challenge is coming to an end, I'm confident that I will continue to work on improving Jarvis, as I use it every day and constantly find new features I'd like to add.

In the near future my next steps will be to build an Android application, configure Jarvis voice terminals in more rooms around the house and connect more equipment. I'd love to have Jarvis operate my Big Green Egg [ceramic grill] and help me cook, but that would require more advanced modifications than the T-shirt gun hardware.

Long term, I'd like to teach Jarvis to learn new functions on his own, rather than having to program him for specific tasks each time. If I spent another year on this challenge, I would focus on learning how [machine] learning works.

Finally, it would be interesting to find ways to make [Jarvis] available to the world. I thought about open-sourcing his code, but right now it's too tied to my own home, its hardware, and its network configuration. If I ever develop a more abstract layer, maybe I'll release it. Or, of course, I may make it the basis for a completely new product.

Conclusions

Developing Jarvis was an interesting intellectual challenge that gave me more experience working with AI tools in areas that are important to our future.

I previously predicted that within 5-10 years we will have AI systems that are more accurate than humans for each of our senses - vision, hearing, smell, and so on - as well as for things like language. It's amazing how powerful these tools have already become, and this year has only reinforced that prediction for me.

At the same time, we are far from understanding how learning works. Everything I've done this year - natural language, face and speech recognition - are all variations of the same fundamental pattern-recognition techniques. We know how to show a computer a lot of examples and have it distinguish between them, but we still don't know how to take an idea from one domain and apply it to a completely different one [for example, applying techniques from face recognition to speech recognition].

As an example, I spent about 100 hours building Jarvis this year, and I ended up with a pretty good system that understands me and does a lot of things. But even if I spent another 1,000 hours, I likely wouldn't be able to build a system that learns new functions on its own - that would require a fundamental breakthrough in AI.

In a certain sense, AI is both closer and further away than we imagine. It is closer in that it can already perform very powerful tasks - driving cars, curing diseases, discovering planets, and understanding media. Each of these things has a huge impact on the world today, but we still have to figure out what real intelligence is.

Overall, it was a huge challenge. Challenges like these always teach me more than I initially expected. This year I thought I would learn more about AI, but I also learned about smart homes and Facebook's internal development environment. That's what makes challenges like this interesting. Thanks for following me through this challenge and I'm looking forward to the next challenge I'll share in a few weeks.