Shining light on Facebook’s AI strategy



In a speech today at Web Summit, Facebook CTO Mike Schroepfer laid out a vision for the role artificial intelligence and machine learning will play in the company's ambitions to improve global connectivity, technology accessibility, and human-computer interaction.

“People want to stay connected and close to other people, so whatever is the best current technology to deploy that is the business we want to be in,” said Schroepfer.

Large companies like Facebook play an incredibly important role in the artificial intelligence and machine learning ecosystem. Their sheer size and ability to corner the market on talent turn almost every strategic decision they make into an industry-wide declaration.

Connecting everything to everything

Despite setbacks, like the explosion of Facebook’s satellite aboard a SpaceX Falcon 9 earlier this summer, the company remains steadfast in its goals to better connect the world. It is pursuing a number of infrastructure projects, made possible by machine intelligence, aimed at augmenting both urban and suburban internet connectivity.

Project Aquila, which has been well documented, is designed to bring connectivity to suburban areas. The carbon-fibre, solar-powered planes fly in a constellation at altitudes higher than a commercial airliner's. The planes communicate with each other via laser and with the ground via radio frequency. Facebook is using trained neural networks to identify population centers across Africa that could be targeted by the planes.
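Facebook hasn't published the details of that model, but the general pattern is a convolutional classifier run over satellite-image tiles that flags areas likely to contain buildings. The sketch below is a minimal, hypothetical version of that idea in PyTorch; the tile size, architecture, and class labels are assumptions rather than Facebook's actual pipeline.

# Illustrative sketch only: a tiny convolutional classifier that flags
# satellite-image tiles likely to contain buildings. The architecture,
# 64x64 tile size, and labels are assumptions, not Facebook's pipeline.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.head = nn.Linear(32 * 16 * 16, 2)            # populated / empty

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = TileClassifier().eval()
tiles = torch.rand(8, 3, 64, 64)                          # a batch of 64x64 RGB tiles
with torch.no_grad():
    populated = model(tiles).softmax(dim=1)[:, 1]         # probability each tile is populated
print(populated)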

For urban connectivity, Facebook is using LiDAR, the same technology behind many self-driving car projects, to map out utility poles. Identifying the location of these poles is critical to optimizing connectivity. Teams at the company are building virtual network graphs on top of the data to design the best possible mesh.

Using LiDAR data to enable urban connectivity in San Jose
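The graph-building step can be pictured with a small sketch: given pole coordinates recovered from a LiDAR scan, connect the poles using the shortest feasible links. The coordinates, the link range, and the minimum-spanning-tree approach below are illustrative assumptions, not the planning tools Facebook actually uses.

# Illustrative sketch only: connect LiDAR-detected utility poles into a
# cheap mesh by growing a minimum spanning tree over pairwise distances.
# Coordinates and the 250 m link range are made-up example values.
import math

poles = {"A": (0, 0), "B": (120, 40), "C": (200, 180), "D": (90, 220)}
MAX_LINK_M = 250  # assumed maximum usable radio link length

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Prim's algorithm: add the shortest in-range link to an unconnected pole each step.
connected, links = {"A"}, []
while len(connected) < len(poles):
    candidates = [
        (dist(poles[u], poles[v]), u, v)
        for u in connected for v in poles
        if v not in connected and dist(poles[u], poles[v]) <= MAX_LINK_M
    ]
    if not candidates:
        break  # remaining poles are out of range of the mesh
    d, u, v = min(candidates)
    links.append((u, v, round(d, 1)))
    connected.add(v)

print(links)  # e.g. [('A', 'B', 126.5), ...]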

Art first, AI revolution later

For consumers, the easiest way to visually interact with Facebook's ML work is through Style Transfer. Built on top of a new mobile deep-learning platform called Caffe2Go, the feature lets users capture artistically stylized video footage in real time. Chris Cox, Facebook's chief product officer, previously demoed the technology at WSJDLive, showing the audience how they can make their own content look like it was created by Van Gogh himself.

When engineers first began to build Style Transfer, the best they could do with the computational resources of a smartphone was to stylize a small frame within a full-frame video. The version we got to see not only altered the entire frame in real time, but also showed no noticeable lag. The feature is set to be integrated into the Facebook app soon and is already being tested in a few countries.
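Caffe2Go itself hasn't been released, but the inference pattern behind the feature is straightforward: a pretrained feed-forward style network is applied once to each camera frame as it arrives. The sketch below uses PyTorch as a stand-in and a dummy one-layer "network" so it runs end to end; in production the model would be a trained transformation network.

# Illustrative sketch only: frame-by-frame feed-forward stylization, the
# general pattern behind real-time style transfer on device. PyTorch
# stands in for Caffe2Go, and the dummy model below is a placeholder
# for a real pretrained transformation network.
import torch

def stylize_stream(frames, style_model):
    """Apply a feed-forward style network to each incoming camera frame."""
    style_model.eval()
    with torch.no_grad():
        for frame in frames:                              # frame: 3 x H x W float tensor in [0, 1]
            styled = style_model(frame.unsqueeze(0))      # one forward pass per frame
            yield styled.squeeze(0).clamp(0, 1)           # back to a displayable image

# Dummy "style network" so the sketch runs end to end.
dummy_model = torch.nn.Sequential(torch.nn.Conv2d(3, 3, kernel_size=3, padding=1))
fake_frames = (torch.rand(3, 240, 320) for _ in range(5))
for out in stylize_stream(fake_frames, dummy_model):
    print(out.shape)  # torch.Size([3, 240, 320])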

Caffe2Go won’t remain limited to Style Transfer — it holds the key to deploying convolutional neural nets across Facebook’s suite of mobile apps.

“For anything we build on the server, we now have a vehicle to ship it to mobile devices,” noted Schroepfer.

Bringing machine intelligence to the smartphone is just step one. For the time being, the world still doesn't have a good answer for training neural networks on mobile devices. Solving that problem, coupled with Facebook's long-term roadmap, makes a compelling argument for a future in which the average person could design and train a custom neural net on their own smartphone for daily use.

Betting big on virtual reality

Even with today's cutting-edge technology, researchers still had to generate hundreds of artistic stylizations and painstakingly tune their attributes to deliver the best possible Style Transfer experience. With continued research, Facebook hopes to someday leverage our own facial expressions to give us another way to interact with its technology.

Near term, this could manifest itself as a "surprise" filter laid over a selfie taken with wide eyes, but improvements in facial tracking could someday let us seamlessly share our emotional state with the technologies around us, perhaps most interestingly while in virtual reality.

Of course, all of this becomes more interesting once corded VR is no more. The next set of VR problems will require leaps forward in computer vision. Facebook has already rolled out new stabilization technology for 360 videos, but there is more to be done. Related technologies like inside-out tracking and speech recognition will also improve the realism of VR experiences.

“We have talked about doing things in the long term in augmented reality,” added Schroepfer. “We would absolutely build devices for it because I think that will be a way you will communicate with AIs in a real-time basis.”

Don’t count sheep, label training sets

Image captioning struggles to identify that the plane in the right image is crashing

Even with a concrete vision of how investment in artificial intelligence can bolster Facebook's products and services, it's important to remember that today's cutting-edge AI research still leaves a lot to be desired.

Machine intelligence thrives on patterns, and unfortunately our world is full of an almost limitless number of outliers. Regardless of the barriers, Facebook has little choice but to prioritize AI as its competitors pour billions into beating the company to the next great breakthrough. The mere fact that everyone is all in on the race is perhaps what makes it so interesting.



