Hands-on with an Alpha build of Google Maps’ Augmented Reality mode – TechCrunch

I think most of us have had this experience, especially when you’re in a big city: you step off of public transit, take a peek at Google Maps to figure out which way you’re supposed to go… and then somehow proceed to walk two blocks in the wrong direction.

Maybe the little blue dot wasn’t actually in the right place yet. Maybe your phone’s compass was bugging out and facing the wrong way because you’re surrounded by 30-story buildings full of metal and other things that compasses hate.

Google Maps’ work-in-progress augmented reality mode wants to end that scenario, drawing arrows and signage onto your camera’s view of the real world to make extra, super sure you’re heading the right way. It compares that camera view with its massive collection of Street View imagery to try to figure out exactly where you’re standing and which way you’re facing, even when your GPS and/or compass might be a little off. It’s currently in alpha testing, and I spent some hands-on time with it this morning.

Google first announced AR walking directions about nine months ago at its I/O conference, but has been pretty quiet about it since. Much of that time has been spent figuring out the subtleties of the user interface. When early builds drew a specific route line on the ground, users tried to stand directly on top of it while walking, even when that wasn't necessary or safe. When the team tried particle effects floating in the air to represent paths and curves, as in one early prototype, a Google UX designer tells us one user asked why they were 'following floating trash.'

The Maps team also learned that no one wants to hold their phone up very long. The whole experience has to be pretty quick, and is designed to be used in short bursts — in fact, if you hold up the camera for too long, the app will tell you to stop.

Firing up AR mode feels like starting up any other Google Maps trip. Pop in your destination, hit the walking directions button… but instead of “Start”, you tap the new “Start AR” button.

A view from your camera appears on screen, and the app asks you to point the camera at buildings across the street. As you do so, a bunch of dots will pop up as it recognizes building features and landmarks that might help it pinpoint your location. Pretty quickly — a few seconds, in our handful of tests — the dots fade away, and a set of arrows and markers appear to guide your way. A small cut-out view at the bottom shows your current location on the map, which does a pretty good job of making the transition from camera mode to map mode a bit less jarring.

When you drop the phone to a more natural position (closer to parallel with the ground, like you might hold it to read texts while you walk), Google Maps shifts back into the standard 2D map view. Hold the phone up like you're taking a portrait photo of what's in front of you, and AR mode comes back in.
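
Google hasn't said exactly how the app decides when to flip between modes, but the behavior described above boils down to watching the phone's pitch. Here's a minimal sketch of that kind of switch, assuming a pitch reading in degrees (0° = flat, 90° = upright); the threshold values are invented for illustration:

```python
# Toy AR/map mode switcher driven by device pitch, mimicking the
# behavior described above. The angle thresholds are assumptions.
AR_ENTER_PITCH = 60.0  # "portrait photo" pose: phone held upright
AR_EXIT_PITCH = 30.0   # "reading texts" pose: phone near-flat

def next_mode(current_mode: str, pitch_deg: float) -> str:
    """Return 'ar' or 'map' given the current mode and device pitch."""
    if current_mode == "map" and pitch_deg >= AR_ENTER_PITCH:
        return "ar"
    if current_mode == "ar" and pitch_deg <= AR_EXIT_PITCH:
        return "map"
    return current_mode  # inside the dead band: no change
```

The gap between the two thresholds (hysteresis) would keep the UI from flickering between modes when the phone hovers near a single cutoff angle.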

In our short test (about 45 minutes in all), the feature worked as promised. It definitely works better in some scenarios than others; if you're closer to the street and thus have a better view of the buildings across the way, it works out its location pretty quickly and with ridiculous accuracy. If you're in the middle of a plaza, it might take a few seconds longer.

Google's decision to build this as something that you're only meant to use for a few seconds is the right one. Between making yourself an easy target for would-be phone thieves and the risk of walking into light poles, no one wants to wander a city primarily through the camera lens of their phone. I can see myself using it for the first step or two of a trek to make sure I'm getting off on the right foot, at which point an occasional glance at the standard map will hopefully suffice. It's about helping you feel more certain, not about holding your hand the entire way.

Google did a deeper dive on how the tech works here, but in short: it takes the view from your camera and sends a compressed version up to the cloud, where it's analyzed for unique visual features. Google already has a rough idea of where you are from your phone's GPS signal, so it can compare against the Street View data it has for the surrounding area, look for things it expects to be nearby (certain building features, statues, or permanent structures) and work backwards to your more precise location and direction. There's also a bunch of machine learning voodoo going on to ignore things that might be prominent but not necessarily permanent (like trees, large parked vehicles, and construction).
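
Google's pipeline is proprietary, but the core idea (extract distinctive visual features from the frame, then match them against features precomputed from georeferenced imagery) can be sketched with off-the-shelf tools. Here's a toy version using OpenCV's ORB features, where street_view_index is a hypothetical dict of descriptors for imagery near the phone's rough GPS fix:

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_candidate(frame_bgr, street_view_index):
    """Match one camera frame against nearby reference descriptors.

    street_view_index is a hypothetical {location: descriptors} dict,
    precomputed from imagery around the phone's rough GPS position.
    Returns the location whose imagery matches the frame best.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, query_des = orb.detectAndCompute(gray, None)
    if query_des is None:
        return None
    best_loc, best_score = None, 0
    for location, ref_des in street_view_index.items():
        matches = matcher.match(query_des, ref_des)
        # Count strong matches only; a production system would also
        # verify geometry (e.g. with RANSAC) and, as Google notes,
        # learn to discount transient objects like trees and trucks.
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_loc, best_score = location, score
    return best_loc
```

From the best match, position and heading can then be refined by solving for the camera transform between the matched points, which is the "work backwards" step described above.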

The feature is currently rolling out to "Local Guides" for feedback. Local Guides are an opt-in group of users who contribute reviews, photos, and place information, and help Google fact-check location data, in exchange for early access to features like this one.

Alas, Google told us repeatedly that it has no idea when it’ll roll out beyond that group.

Google Assistant Actions up 2.5x in 2018 to reach 4,253 in the US – TechCrunch

In addition to competing for smart speaker market share, Google and Amazon are also competing for developer mindshare in the voice app ecosystem. On this front, Amazon has soared ahead: the number of available voice skills for Alexa devices has grown to top 80,000, the company recently announced. According to a new third-party analysis from Voicebot, Google is trailing by a wide margin with its own voice apps, called Google Assistant Actions, which total 4,253 in the U.S. as of January 2019.

For comparison, 56,750 of Amazon Alexa’s total 80,000 skills are offered in the U.S.

The report notes that the number of Google Assistant Actions has grown 2.5 times over the past year, which is slightly faster growth than seen on Amazon Alexa, whose skill count grew 2.2 times during the same period. But the total is a much smaller number, so growth percentages may not be as relevant here.

In January 2018, there were 1,719 total Google Assistant Actions in the U.S., the report said. In 2017, the number was in the low hundreds at the beginning of the year, reaching 724 by October.
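
The growth multiple is easy to verify from the reported counts:

```python
# Growth multiple implied by Voicebot's U.S. counts, as reported above.
jan_2018, jan_2019 = 1_719, 4_253
print(f"{jan_2019 / jan_2018:.2f}x")  # 2.47x, i.e. the ~2.5x cited
```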

Voicebot also examined which categories of voice apps were popular on Google Assistant platforms.

It found that three of the 18 categories accounted for more than one-third of all Google Assistant Actions: Education & Reference; Games & Fun; and Kids & Family.

The Education category topped the list with more than 15 percent of all Actions, while Games & Fun was 11.07 percent and Kids & Family was 9.29 percent.

Local and Weather were the least popular.

On Alexa, the top categories differ slightly. Though Games & Fun is popular on Google, its Alexa equivalent — Games & Trivia — is the No. 1 most popular category, accounting for 21 percent of all skills. Education was second most popular at around 14 percent.

It’s interesting that these two top drivers for voice apps are reversed on the two platforms.

That could indicate that Alexa is seen to be the more “fun” platform, or one that’s more oriented toward use by families and gaming. Amazon certainly became aware of the trend toward voice gaming, and fanned the flames by making games the first category it paid developers to work on by way of direct payments. That likely encouraged more developers to enter the space, and subsequently helped boost the number of games — and types of gaming experiences — available for Alexa.

Voicebot's report rightly raises the question of whether the raw skill count even matters, though.

After all, many of the Alexa skills offered today are of low quality, or are experimental attempts from developers testing out the platform. Others are just fairly basic: the voice app equivalent of the third-party flashlight apps for iPhone that predated Apple building the feature into iOS. For example, there are now a handful of skills that turn on the light on Echo speakers so you can have a nightlight by way of the speaker's blue ring.

But even if these early efforts sometimes fall short, it does matter that Alexa is the platform developers are thinking about, as it's an indication of platform commitment and an investment on developers' part. Google, on the other hand, is powering a lot of its Assistant's capabilities itself, leaning heavily on its Knowledge Graph to answer users' questions, while also leveraging its ability to integrate with Google's larger suite of apps and services, as well as its other platforms, like Android.

In time, Google Assistant may challenge Alexa further by capitalizing on geographic expansions, but for the time being, Alexa is ahead on smart speakers as well as, it now seems, on content.

Amazon’s Echo Wall Clock is back on sale after connectivity fix

The Echo Wall Clock was first announced by Amazon in September and started shipping just before the holidays in December. Just over a month after the clock was first made available to buy, Amazon decided to pull it because of problems with Bluetooth connectivity. That feature is essential to the device’s function, as it needs to connect to another Echo device in order to operate with voice controls. With the fix, users will once again be able to set alarms and timers via Alexa that will be displayed on the 60 LED lights around the edge of the clock’s face.
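
Amazon hasn't documented exactly how timers map onto the ring, but the natural scheme for a 60-LED clock face is one LED per remaining minute. A hypothetical sketch of that mapping:

```python
def led_ring_state(seconds_remaining: int, num_leds: int = 60) -> list:
    """Which ring LEDs to light for a countdown timer.

    Hypothetical mapping: one LED per remaining minute, rounded up,
    so a 25-minute timer lights LEDs 0 through 24.
    """
    minutes_left = -(-seconds_remaining // 60)  # ceiling division
    minutes_left = max(0, min(minutes_left, num_leds))
    return [i < minutes_left for i in range(num_leds)]

# Example: 15 minutes left on a timer -> 15 LEDs lit.
assert sum(led_ring_state(15 * 60)) == 15
```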

Audi helps you avoid red lights by suggesting speeds

Speed suggestions and traffic light information (TLI) are available as part of an Audi Connect Prime feature on 2017 and newer models other than the A3 and TT. You're still limited to using them in certain areas, however. TLI is currently available in 13 urban regions: Dallas, Denver, Gainesville, Houston, Kansas City, Las Vegas, Los Angeles, New York (White Plains), Orlando, Phoenix, Portland, the San Francisco Bay Area (Palo Alto and Walnut Creek) and Washington, DC.

The technology could become more useful in the future, though. Future TLI upgrades might use a car’s automatic stop/start system to restart the engine when a red light is turning green, and a navigation tie-in could plan routes that minimize stops. Think of this as another small step toward autonomous cars. You might still have to take the wheel, but computers are minimizing many of the little annoyances.
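
Audi hasn't published the algorithm behind its speed suggestions, but the underlying math is plain kinematics: given the distance to the light and the timing of its next green window (the kind of data a vehicle-to-infrastructure feed provides), any constant speed that lands your arrival inside that window works. A rough sketch, with all names and values assumed for illustration:

```python
def suggest_speed_kmh(distance_m, green_start_s, green_end_s,
                      speed_limit_kmh=50.0):
    """Pick a legal constant speed that reaches the light on green.

    green_start_s / green_end_s: seconds from now until the next green
    window opens and closes (hypothetical signal-timing data).
    Returns a speed in km/h, or None if no legal speed makes this green.
    """
    limit = speed_limit_kmh / 3.6  # m/s
    # Arriving at time t = distance / v, we need:
    #   green_start_s <= t <= green_end_s
    v_max = distance_m / green_start_s if green_start_s > 0 else limit
    v_min = distance_m / green_end_s
    v = min(v_max, limit)
    if v < v_min:
        return None  # even at the limit you'd arrive after green ends
    return round(v * 3.6, 1)

# Example: light 300 m away, green from t=20 s to t=40 s.
print(suggest_speed_kmh(300, 20, 40))  # 50.0 (the limit works here)
```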
