A few days ago we presented the Mk2 prototypes of our projects. My Chef app made it to a functional state and is finally able to recognise voice commands and speak – which is a big success!

Graham Pullin reminded me of an Edinburgh-based company called CereProc, which specialises in making synthesised voices. After getting in touch with them, I received an amazing one million credits for their CereVoice Cloud service, which enables my web app to speak with a Scottish voice (currently using “Kirsty”) – thank you very much, guys.

You can try the Mk2 prototype here. It works in desktop Chrome only (and asks for permission to use the microphone each time you are expected to speak).

Design

After making some simple wireframes (there’s no need for too complex an app hierarchy, since most of the work goes into the conversational part), I started work on the logo for Chef. After generating many concepts, the oblique type with a ladle (this article’s header image) won the game.

Logo iteration for Chef

Colour-wise, after going through some material on colour psychology and looking at the various ways contemporary brands communicate, I felt drawn to naturalness, friendliness and warmth. Yellow plays quite well on the warm and friendly side, so a shade of it was chosen. As for the ubiquitous background colour, I really like to let the content speak, so I was considering dark grey. But it felt too tasteless and unnatural. Then I remembered a great brown hue, which I had used for an earlier pizzeria website concept:

Older pizzeria design concept from 2010

And there it was: a great colour pair for Chef’s natural, slightly appetising, friendly tone of voice 🙂

Chef-Identity

The illustrations in the Mk2 are carried over from the Mk1 prototype, but they will be redone. I am considering trying out some animation as well.

Illustrations accompanying each step of making a Caprese Wrap

Technology

My Mk2 prototype uses Google Chrome’s Web Speech API. This lets the app run on smartphones with Chrome (Android only, unfortunately – the Web Speech API is not available on iOS devices), Android tablets, and notebooks and desktops with Chrome installed.

The recognition works surprisingly well; if you open the console in Chrome, you can see which words the prototype picks up.
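As a rough sketch of how such a setup can work, here is a minimal example of wiring up Chrome’s (webkit-prefixed) Web Speech API and logging transcripts to the console. The command list and `matchCommand` helper are purely illustrative assumptions, not the prototype’s actual vocabulary:

```javascript
// Illustrative command vocabulary – not the actual commands Chef understands.
var COMMANDS = ["next", "back", "repeat"];

// Return the first known command found in a transcript, or null.
function matchCommand(transcript) {
  var words = transcript.toLowerCase().split(/\s+/);
  for (var i = 0; i < words.length; i++) {
    if (COMMANDS.indexOf(words[i]) !== -1) return words[i];
  }
  return null;
}

// Browser-only wiring: Chrome exposes the API with a webkit prefix.
if (typeof webkitSpeechRecognition !== "undefined") {
  var recognition = new webkitSpeechRecognition();
  recognition.continuous = true;     // keep listening between phrases
  recognition.interimResults = true; // surface partial transcripts too
  recognition.onresult = function (event) {
    for (var i = event.resultIndex; i < event.results.length; i++) {
      var transcript = event.results[i][0].transcript;
      console.log(transcript); // this is what you see in Chrome's console
      var command = matchCommand(transcript);
      if (command) console.log("command:", command);
    }
  };
  recognition.start(); // triggers Chrome's microphone permission prompt
}
```

Because the page is not served over a persistent secure context in the prototype, Chrome re-asks for microphone permission on each `start()` call – which matches the behaviour described above.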

For the front-end transitions, I am using the great jQuery Mobile framework, and I’m learning a lot of JavaScript/jQuery along the way! Good times 🙂
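For illustration, a page-to-page transition in jQuery Mobile might look like the sketch below. The `"#step-2"` page id and the option values are assumptions for the example, not the prototype’s actual markup:

```javascript
// Hypothetical transition options for moving between recipe steps.
function stepTransitionOptions() {
  return {
    transition: "slide", // one of jQuery Mobile's built-in transitions
    reverse: false       // slide forward rather than backwards
  };
}

// Only call the real API when jQuery Mobile is actually loaded in a page.
if (typeof $ !== "undefined" && $.mobile) {
  $.mobile.changePage("#step-2", stepTransitionOptions());
}
```

`$.mobile.changePage` animates between the two page `div`s, so the app can advance through recipe steps without full page reloads.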