Getting to know Google Glass


A colleague at Epic is part of the Glass Explorer programme, and this weekend I was pleased to be able to take a Google Glass home to learn and experiment with. We took the device up to the LT14 show last week, where it proved a bit of a draw on the stand and was a great way to introduce people to this new world of wearable technology.

[Photo: the author wearing Google Glass]

My first impression was that this is absolutely not intuitive to set up! It’s not like getting a new iPhone that ‘just works’: this device takes a bit of time to get to know and understand. That’s fine, I guess; most of our interactions with computers to date have been limited to a mouse or touchscreen as input devices, so you do need to learn a new set of basic interactions.

Unfortunately there is no on-screen tutorial when it’s fired up (I’m not the device’s first user, so one may have appeared on first boot, but there is no obvious tutorial in the menu system that I could view either). So I had to go to the Google Glass Help site and watch a few videos on my tablet first. That’s a bit of a drag to be honest; I really dislike the trend for 5–10 minute instructional videos when I just want to get going.

People who are used to voice input, such as sat-nav and Siri users, will probably feel quite at home with Glass. I don’t own any voice-input devices myself; my Android phone has Google voice search, but it hadn’t really interested me until now. So I had to get over that initial self-consciousness of using voice input for the first time. Frankly, saying “OK, Glass” out loud in a social context makes you feel like a complete dick. Extroverts and show-offs may like that, but not me, and I can’t really see my opinion changing the more I use it; maybe I’m just not a fan of voice input devices, especially in social situations, where they put up significant barriers. A new name has even been coined for these folks: Glassholes.

Anyway, voice control is a major part of the Glass experience, so I got stuck in regardless. While navigating the Glass menu system by voice is very easy, the speech recognition comes unstuck when you reach your contacts directory. My surname turns out to be phonetically similar to a colleague’s, so Glass ended up sending several personal photos that I had intended for Mrs Aberdour to Gavin Beddow, Epic’s head of mobile! I think it was picking up the last two syllables of Aberdour and mistaking them for Beddow. As there is no confirmation step before sending messages, and no ‘Stop!’ or ‘Cancel!’ command, you rather annoyingly just have to sit there watching the progress bar as your personal pictures head off to the wrong recipient. Not a good start.

Fortunately you can also control Glass by touch. A tap on the side opens the menu; swipe backwards or forwards along the side to scroll through the options, and tap again to select one. Swipe down to go back one step. Push the button on top to take a photo, or push and hold it to take a video. A neat trick is pushing the button and then tapping the side, which takes a photo with a superimposed screenshot of the display, called a Vignette. That was all quite easy to pick up with the assistance of Google Glass Help, which I again had to access from my tablet while getting familiar with things.
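
As an aside for any developers reading: these same touchpad gestures are exposed to apps through the Glass Development Kit (GDK) preview. Here’s a minimal sketch of how an app might listen for them, assuming the GDK’s GestureDetector API; this is my own illustration, not code shipped with the device.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

// Bare-bones Glass activity that listens for the touchpad gestures
// described above: tap to select, swipe forwards/backwards to scroll,
// swipe down to go back a step.
public class TouchDemoActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        gestureDetector = new GestureDetector(this);
        gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
            @Override
            public boolean onGesture(Gesture gesture) {
                switch (gesture) {
                    case TAP:         // select the current menu option
                        return true;
                    case SWIPE_RIGHT: // scroll forwards through options
                        return true;
                    case SWIPE_LEFT:  // scroll backwards
                        return true;
                    case SWIPE_DOWN:  // go back one step
                        return true;
                    default:
                        return false;
                }
            }
        });
    }

    // Motion events from the side touchpad arrive here and are handed
    // to the detector, which classifies them into gestures.
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return gestureDetector.onMotionEvent(event);
    }
}
```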

So, onto the interwebs. There appears to be no way to open a URL directly, which is what I immediately wanted to do, but you can run a Google voice search to get online. You can’t search for a URL directly either, at least not the ones I tried: there were sites I wanted to visit to see how they looked, but unless I could find my way there via Google Search those efforts were scuppered, and it was surprisingly hard to locate some sites that way. The upshot is that you have to launch web pages from Google search results. The speech recognition in Google voice search is extremely hit and miss, though, and produces some really bizarre interpretations of your voice commands. Glass itself may be beta, but Google has offered voice search on Android devices for a while now, and to be honest it isn’t even fit for purpose as a ‘beta’. Given that the whole future of Glass depends on this feature, I think it puts things on pretty shaky ground. I can imagine there are a LOT of Google engineers focusing on speech recognition right now!
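
Incidentally, Glass is Android underneath, so the same speech engine is what apps reach through Android’s standard RecognizerIntent API. A minimal sketch, assuming a plain GDK activity (again my own illustration, not Glass-specific sample code):

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;

import java.util.List;

// Illustrative only: asks the system speech recogniser for input,
// the same plumbing that sits behind Google voice search.
public class SpeechDemoActivity extends Activity {

    private static final int SPEECH_REQUEST = 0;

    private void startSpeechInput() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        startActivityForResult(intent, SPEECH_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == SPEECH_REQUEST && resultCode == RESULT_OK) {
            // The recogniser returns a ranked list of guesses; as the
            // experience above suggests, the top guess isn't always right.
            List<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            String spokenText = results.get(0);
            // ... do something with spokenText
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```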

Once you get a decent search result you can tap to open it in the device’s web browser. The screen resolution is 640px wide by 360px high, so mobile-friendly, responsive sites obviously work best. Pages can scroll, but at first sight there appears to be no way to click links. The most intuitive thing to do is simply to use a voice command and read out the link you want to follow, but that didn’t work, and I had to Google how to navigate a website on Glass, yet again reverting to my tablet for online help. It turns out you can select links with a two-finger tap: this lets you pan around the page by turning your head, line a link up with a target icon, then tap again to follow it. It’s actually really cool, and Google seem to have nailed it.

I’m focussing too much on the browser anyway, because the overriding feeling you get with Google Glass is that this is not a device for web browsing. Its big strength right now (in beta form) is as a hands-free camera phone, and the core features seem to revolve around that. Communications are limited to your Google Contacts, though, which is a constraint.
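
To show what I mean, an app that wants that hands-free camera can simply delegate to the built-in capture experience using Android’s stock image-capture intent. A rough sketch (my illustration; retrieving the saved photo’s path is device-specific, so I’ve left it out):

```java
import android.app.Activity;
import android.content.Intent;
import android.provider.MediaStore;

// Sketch of leaning on Glass's core strength, the camera, by
// delegating to the built-in capture experience via Android's
// standard image-capture intent.
public class CameraDemoActivity extends Activity {

    private static final int TAKE_PICTURE_REQUEST = 1;

    private void takePicture() {
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(intent, TAKE_PICTURE_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == TAKE_PICTURE_REQUEST && resultCode == RESULT_OK) {
            // The photo has been captured; locating the resulting file
            // is left out here, as the mechanism is Glass-specific.
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```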

From what I understand, though, the more interesting power of this device comes from its location-aware and context-aware services, through integration with Google Now, so the next step is to venture outside. That makes me nervous, as this is not my device, and it isn’t a great fit over my normal glasses, so I worry it may topple off. But I am keen to see more of its location- and context-aware potential, as this is where Google Glass will tie into our work at Epic as a device for learning and performance support. It makes me somewhat uncomfortable to enter that world, though, because the more you become immersed in Google Now, the more you have to tie everything in your life to your Google Account. In the desktop/tablet/smartphone world my approach is to disable many Google options so that the company can only track the bare minimum of my data, but I think that to get the most out of the context-aware, location-aware and communication services that Google Glass offers, I’d have to turn all that back on, becoming Google’s digital serf in the process.

Back to the device. In summary, it took me about an hour to become confident using Google Glass by both voice and touch control. There is an expectation these days that new gadgets should ‘just work’ without instructions, largely thanks to Apple’s incredible advances in user interface design; in the case of Glass, though, I’m quite happy with that investment of time, given the radical departure from the kinds of computing we’ve been used to in previous decades. As a bonus, it fitted OK on top of my ‘traditional’ glasses too, although not well enough that I’m confident to venture outside. But I’ve been wearing glasses since I was about 10, and this is the first time I’ve seriously thought about getting contact lenses or corrective eye surgery! If this were my own Google Glass, I think I’d probably do it. I guess that means I’m kind of hooked, or at least wowed by the technology. It just means I’d have to whore myself out to Google in the process, and that, frankly, fills me with dread. Maybe I’ll go and re-read Jaron Lanier’s You Are Not a Gadget and get some renewed perspective before making a decision.

