ToyTalk renames to PullString, repositions as authoring tool for bots

TechCrunch, 26/04/2016, by Haje Jan Kamps

ToyTalk, the startup that mashed together Barbie and the Internet of Things and let your kids chat away with Thomas the Tank Engine for hours on end, is cashing out on the bot craze with a slight realignment of philosophy — and a new name.

“When working with children,” the company’s CEO Oren Jacob said, “you are beholden to some pretty strict laws, and ToyTalk as a company had to work diligently to ensure we toe the line.”

To do that, the company developed a whole toolset to enable writers to create narratives for children’s toys. The toolset was called PullString, named after the string you can pull on some dolls to make them talk. The name is undoubtedly a nod to the Andy cowboy doll from Toy Story — Jacob did head up Pixar’s technical team as its CTO, after all.

The change in name, then, reflects a change in direction for the company. The ToyTalk brand continues as before, as the company's COPPA-compliant brand aimed at making conversation with children's toys easier. But the wider conversation happening in the space is about conversational interfaces, or "chatbots" if you like, and by design or by accident, that's where PullString finds itself. With the tagline "If you can write, you can make a bot," the company is perfectly positioned to make bots easier to create and faster to deploy.

Better scripts for better bots

What a lot of people miss, amid the amazement at Apple's Siri becoming an expert on everything and Microsoft's Tay becoming a raging racist holocaust-denier, is that for many commercial applications, the interfaces you talk to aren't meant to be the be-all, end-all of knowledge.

PullString started its journey making bots for children, mostly because children's limited vocabularies made them easier to understand (and, let's be honest, children don't mind as much if some of the answers they get are utter nonsense). But in the process, the company also designed a world of rigidly scripted conversations that somehow manage to feel natural. With good reason, too: there are some things a child might ask about that you wouldn't want the bot to simply google and answer.

The magic, then, is in how a chatbot deflects a question. “Maybe that’s something you should talk to your parents about,” for example, is a perfectly fine answer to some of the difficult questions — much like when Siri recommends certain URLs when your line of questioning hits certain keywords.
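That deflection logic can be sketched in a few lines. This is a hypothetical illustration, not PullString's actual tooling or API: a keyword check routes sensitive questions to a safe, writer-authored deflection instead of an open-ended answer.

```python
import re

# Hypothetical topic list and scripted deflection line, for illustration only.
SENSITIVE_KEYWORDS = {"death", "divorce", "money"}
DEFLECTION = "Maybe that's something you should talk to your parents about."

def respond(question: str, scripted_answers: dict) -> str:
    """Answer from the writer's script, deflecting sensitive topics."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    if words & SENSITIVE_KEYWORDS:
        return DEFLECTION  # deflect rather than improvise an answer
    # Otherwise fall back to the authored script, or gently redirect.
    return scripted_answers.get(question.lower(),
                                "Let's talk about something else!")

print(respond("What is death?", {}))        # deflects to a safe scripted line
print(respond("hello", {"hello": "Hi!"}))   # answers from the script
```

The point of the sketch is that the "intelligence" lives in the writer's script, not in the matching logic, which is exactly the division of labor the article describes.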

While conversational interfaces have developed in leaps and bounds, and the PullString bots can now understand a far wider set of words and phrases, one constraint remains: you still don't want a free-for-all for children.

“We want a whole lot of people, talking to a whole lot of characters, a whole lot of the time,” Jacob likes to say, before adding, with a wink and a nod, “the best of which are ours.”

The key here is "characters," which is a good way to frame meaningful interactions. If you stop a random person on the street, you could ask them about literally anything, but you may not get a reply. Bots work the same way: if you pull up to a drive-through window at McDonald's and ask about the weather, about ordering a pizza or about whether the holocaust happened, you're probably not there to complete a transaction. For conversational interfaces to work well, both sides need to operate on the same plane: whether you're talking to a human or a robot at that drive-through window, you won't be ordering pizza; you'll be ordering items from the fast-food chain's menu.

Expect bots to have more targeted tasks — and to stick to them

A great way to experience this is in the company's Facebook Messenger bot game Humani: Jessie's Story. In it, Jessie has just lost her job, been kicked out of her house and is knee-deep in trouble. The story is good and worth playing through (but if you can't be bothered and don't mind spoilers, Sarah Kessler explains the experience well). What's even more interesting is what happens at the far edges of the story, when you try to push the conversation off topic: Jessie just doesn't engage. At times, she'll outright ignore you when you're being obnoxious. It definitely feels less interactive when you try to derail the conversation, but on the other hand, when you engage and drive the story forward, you get rewarded by the next twist and turn in the story.

Taking a step back, keeping you on track is what you would expect from a chatbot on a mission, and exactly what you’d hope from every bot: It is there to accomplish a task (completing an order, helping you avoid fraud on your bank account, booking a table at a restaurant), and everything not pertinent to that mission is irrelevant.
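The task-focused behavior described above can be illustrated with a toy example. This is an assumed sketch, not any real ordering system: the bot recognizes only intents tied to its mission (taking a fast-food order) and steers everything else back to the task instead of trying to answer it.

```python
# Hypothetical drive-through bot: a fixed menu defines the whole
# universe of things it will talk about.
MENU = {"burger": 3.99, "fries": 1.99, "shake": 2.49}

def take_order(utterance: str) -> str:
    """Extract menu items from the utterance; deflect anything off-topic."""
    order = [item for item in MENU if item in utterance.lower()]
    if not order:
        # Off-topic input: redirect to the mission, don't engage.
        return "Sorry, I can only take menu orders. What would you like?"
    total = sum(MENU[item] for item in order)
    return f"Got it: {', '.join(order)}. That's ${total:.2f}."

print(take_order("A burger and fries, please"))
# → Got it: burger, fries. That's $5.98.
print(take_order("Do you sell pizza?"))
# → Sorry, I can only take menu orders. What would you like?
```

Everything not pertinent to the order is treated as irrelevant, which is precisely the design constraint that makes narrow bots feel competent.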

Hearing PullString explain how its authoring tools work produced a big lightbulb moment: the breakthrough for conversational interfaces isn't going to come from creating bots that are pretty good at most things, like Siri. Yes, there's a place for personal assistants in the bot ecosystem, but by scripting responses carefully and walling in what a conversation is expected to cover, you can make bots that are far more efficient and far easier to use.

What PullString has been able to do, then, is abstract the AI and voice-command-style software away from the task at hand, enabling good writers to create phrases, sentences and clarifications to help facilitate efficient conversations. Expect your bank, your online ordering and your drive-through window to get smarter and quicker very soon — and I wouldn’t be at all surprised if a lot of those interactions will be powered by PullString going forward, especially when the company opens up its tools to developers later this year.
