The Internet has been a vital part of people's lives for decades. An entire generation has grown up never knowing a world that wasn't constantly connected through the Web. During that time, most aspects of the Internet have changed and evolved. But one piece of technology remains almost the same now as it was at the very start: the search engine. A recent article in the MIT Technology Review hints that this might finally be changing.
The article discusses AI and visual search, and it demonstrates just how much this new technology could impact the world. One of its first points is just how old textual search techniques are, especially compared with how much other aspects of the Internet have grown. Over time, people have come to depend more and more on the Internet for even fairly basic purchases; with rapid delivery now common, it's not unusual for some people to do all of their grocery shopping online. But this process lacks one important component of traditional shopping: people can't look at an item and find similar things based on its appearance. Window shopping is a popular pastime for good reason. People are visually oriented and like to shop based on visual cues, and that's simply not feasible within a textual interface.
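The core idea behind finding "similar things based on appearance" can be sketched in a few lines. A minimal, hypothetical example: in a real system, a deep network would turn each product photo into an embedding vector, and similarity would be measured between those vectors. Here, random vectors stand in for real image embeddings, and all names (`catalog`, `most_similar`) are illustrative.

```python
import math
import random

random.seed(0)

def rand_vec(dim=64):
    # Stand-in for a deep network's image embedding.
    return [random.gauss(0, 1) for _ in range(dim)]

# Mock product catalog: name -> embedding vector.
catalog = {name: rand_vec() for name in
           ["red sneaker", "blue sneaker", "leather boot", "canvas tote"]}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, catalog, k=2):
    # Rank catalog items by visual similarity to the query embedding.
    scores = {name: cosine(query_vec, vec) for name, vec in catalog.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A shopper's photo of a red sneaker would embed near the catalog's
# red sneaker; we simulate that by adding a little noise.
query = [x + random.gauss(0, 0.1) for x in catalog["red sneaker"]]
print(most_similar(query, catalog))
```

The noisy query ranks "red sneaker" first, which is the whole appeal of visual search: matching by appearance rather than by text keywords.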
But some companies have found ways around this problem. They're working on visual search features that incorporate advanced artificial intelligence. One of the best examples is a company called Slyce. Like many companies, Slyce saw just how limited textual searches could be, and also recognized just how computationally taxing visual search techniques are. But Slyce found an innovative way around this problem: combining deep learning techniques with cloud-based technologies to create a new distributed learning system.
Slyce built a system that runs the most computationally taxing work on its own servers, while image acquisition runs on a person's phone or in a browser presenting a company's product line. One can think of this as a bit like the connection between brain and eyes: the client software works as the eyes, while the information is processed within Slyce's facilities. The end result is a set of techniques that allow advanced image recognition to run on almost any platform, from a cellphone to a self-hosted server. What it enables is software that can actually look at images and make decisions based on that data.
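The brain-and-eyes split described above can be sketched as two small functions. This is a hypothetical illustration, not Slyce's actual API: the client side only validates and packages the photo for transport, while the server side (where a real deep-learning model would run) decodes it and returns results. All names here (`package_image`, `recognize`) are invented for the example.

```python
import base64
import json

def package_image(raw_bytes, max_bytes=1 << 20):
    """Client side ("the eyes"): validate size and encode the photo
    for transport. A phone or browser stays lightweight by doing
    only this much work."""
    if len(raw_bytes) > max_bytes:
        raise ValueError("image too large; client should downscale first")
    return json.dumps({"image": base64.b64encode(raw_bytes).decode("ascii")})

def recognize(payload):
    """Server side ("the brain"): decode the upload and hand it to the
    heavy model. A real service would run deep-learning inference here;
    this stub just reports what it received."""
    image = base64.b64decode(json.loads(payload)["image"])
    return {"bytes_received": len(image), "matches": ["placeholder product"]}

# In practice the payload would travel over HTTPS from the client
# to the recognition service; here we call the stub directly.
payload = package_image(b"\xff\xd8fake-jpeg-bytes")
print(recognize(payload))
```

The design point is that the expensive part never runs on the client, which is why the same recognition capability can face anything from a cellphone camera to a self-hosted storefront.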