Picture this: You see a shirt someplace – on a friend, on a shelf, in a store. You take out your phone, open an app belonging to a major retailer, and grab a photo of it. Instantly, the app recognizes the shirt, and directs you to a page where you can buy one of your own – and the retailer quietly takes note that you’re interested.
This is the promise of Slyce, a Toronto startup that’s developed the technology to intelligently recognize objects, and then use that recognition to connect consumers with retailers’ online stores. In the parlance of the startup world, in which every new product must be explained relative to some older product, it’s “Shazam for things.”
“What we’re doing is allowing people to snap a photo of any item in the real world. We identify what that item is, and allow you to purchase that item,” says Mark Elfenbein, the company’s chief digital officer.
Except, unlike Shazam – the popular mobile app that recognizes music and sends people to buy it at the iTunes store – consumers will never see a Slyce app, as such. The company is offering its technology as a kit to work under the hood of big brands’ websites – a so-called white-label solution. Mr. Elfenbein says Slyce is working with six of the top 20 retailers in the U.S., most of which are launching in the third quarter of 2014, and remain under non-disclosure agreements that keep them from being named.
The software works in two stages: it first determines what kind of object it’s looking at, and then determines that object’s specific properties. For instance, it must first decide whether the item in front of it is a shirt, or a vase, or a Studebaker. Once it’s narrowed its subject down, it analyzes the image according to a schema that Slyce’s team has programmed for that object, building what’s called an “attribution model” – a model of an object’s attributes.
In the case of a shirt, it might track not just colour and pattern, but the number of buttons, the distance between them, the number and location of pockets, the shape of the collar, and so on. By breaking down the object this way, Slyce can not only find an exact product match, but can also compute the most similar item a particular retailer has on offer.
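Slyce hasn’t published how its matching works, but the “most similar item” idea described above can be sketched as a nearest-neighbour search over attribute vectors. Everything here – the attribute fields, the catalogue entries, the use of plain Euclidean distance – is an illustrative assumption, not Slyce’s actual schema:

```python
import math

# Hypothetical attribute vectors: each shirt reduced to a few numbers
# (button count, pocket count, collar-shape code, colour hue in degrees).
catalogue = {
    "oxford-blue": [7, 1, 1, 210],
    "flannel-red": [6, 2, 2, 10],
    "polo-green":  [3, 0, 3, 120],
}

def distance(a, b):
    """Euclidean distance between two attribute vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_match(query, items):
    """Return the name of the catalogue item nearest the query vector."""
    return min(items, key=lambda name: distance(query, items[name]))

# Attributes extracted from the shopper's photo (made up for this sketch)
photo_attributes = [7, 1, 1, 205]
print(closest_match(photo_attributes, catalogue))  # → oxford-blue
```

A production system would weight attributes differently – a wildly different collar matters more than a slightly different hue – but the shape of the problem is the same: reduce the photo to a vector, then rank the catalogue by distance.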
Of course, the system first needs to know what it’s looking for. When a retailer comes on board, Slyce starts by ingesting its visual catalogue. (Fortunately, most large retailers have already built extensive systems to photograph their inventories, for their online stores.) Then, the company builds the models for different classes of objects, if it hasn’t already.
“Every time we enter a new market, there’s a decent amount of building at the front end of this,” says Mr. Elfenbein. Objects in a toy store will have different parameters than ones in a housewares store – but once the company has built a model for making sense of toy cars instead of pans, it can reuse that model for the next toy retailer.
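The pipeline Mr. Elfenbein describes – classify the object first, then hand it to a class-specific attribute schema, building new schemas only for unseen categories – might be organized as a simple model registry. The class names and stubbed extractors below are hypothetical stand-ins for illustration:

```python
def classify(image_path):
    """Stage 1: coarse classification. Stubbed here; a real system
    would run an image classifier over the photo."""
    return "shirt"

# Stage 2: one attribute extractor per object class, built once per
# market and reused across retailers in that market.
extractors = {
    "shirt":   lambda img: {"buttons": 7, "pockets": 1, "collar": "spread"},
    "toy_car": lambda img: {"wheels": 4, "scale": "1:64"},
}

def analyze(image_path):
    """Run both stages; fail loudly for classes with no model yet."""
    kind = classify(image_path)
    if kind not in extractors:
        raise ValueError(f"no attribution model yet for {kind!r}")
    return kind, extractors[kind](image_path)

print(analyze("photo.jpg"))
```

The registry pattern captures the economics of the quote: the cost is in writing a new extractor when entering a market, while every subsequent retailer in that market dispatches through the same table.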
Ultimately, the technology will be built right into retailers’ online stores and apps. But letting consumers buy products they see around them is just one use for Slyce’s technology. As always, retailers will be paying keen attention to what consumers are photographing, whether they make a purchase or not.
In fact, from the retailer’s perspective, “the data is almost as important as the purchase itself,” says Mr. Elfenbein. The software can just as easily be used to build wish lists, or to let customers build baby or wedding registries.
Slyce is based in Toronto, but is increasingly distributed across North America. The company raised a $4-million friends-and-family seed round last year, and in the last week closed a $10.75-million first round. The Canadian-born, Minneapolis-based Mr. Elfenbein says a lot of that capital has gone into acquiring image-recognition companies, both in Canada and, increasingly, in the Bay Area.
It’s a fast start for a growing company. “It has the potential to revolutionize the way consumers interact with brands,” says Mr. Elfenbein. If the product works as advertised, he might not be overstating it.