An interesting follow-up to the Forbes article that I posted about the “sharing economy”. I really like the idea of communally owned assets where that makes sense but, the more I read, the more I’m unsure how this can be made to work for car ownership. Sure, a group of neighbours can own a pool of vehicles that they share amongst themselves but, given that they likely fall within a similar demographic group, aren’t they all likely to need their cars at roughly the same time, i.e. for travelling to work each day?
Car sharing seems to work best when the vehicles are shared within an economically diverse group, where demand is more likely to be spread across different times of day.
Over the last several years, the idea of collaborative consumption has really taken off. Thanks to startups like Airbnb, Getaround, Taskrabbit and others, people are making more efficient use of their assets or time. The idea that anyone with an extra room to share, or a car they barely use, or spare time or skills that can be better utilized by others — all of it has created a whole new group of marketplaces built on connecting “those who have” with “those in need” on a short-term basis. That’s what the sharing economy is all about.
In a lot of ways, the sharing economy is helping to reduce peak demand for goods and services. That guests can rent rooms on Airbnb during SXSW is a more efficient use of resources than if the Austin hospitality industry decided to build a whole bunch of hotels just to deal with one week of visitors. And Getaround or Relayrides renters are helping to make use of cars that otherwise go unused most of the time.
But the thing about the sharing economy is that, at least when it comes to marketplaces like Airbnb or Getaround, it still relies on a lot of people owning a lot of things. And if we’re talking about true efficiency, it seems to me that we’re going to need to go a step beyond just the owner-renter model for the collaborative consumption market, and into an area that’s based on fractional ownership of goods.
Fractional ownership is not a new idea — vacation time-shares have been around forever, for instance — but it could be applied more broadly and more efficiently in more markets. One prime example is in the way people own and use cars: It’s no surprise that most vehicles go unused 22 or 23 hours out of the day. And the various car- and ride-sharing services are getting users one step closer to not needing their own vehicles, at least in urban areas and at least part of the time.
But what if, instead of most people on my block owning a car that sits parked the vast majority of the time, each of us shared ownership of a vehicle or group of vehicles in the neighborhood? Sure, I can rent nearby neighbors’ cars today on an a la carte basis, but that still requires a person to purchase, pay insurance for, and maintain that vehicle for himself, me, and anyone else who wants to use it. For those of us who don’t own our own vehicles, there’s also the tricky matter of insurance, and who’s liable and whose coverage applies when an accident happens in someone else’s car.
On-demand car rental services like Zipcar have gotten us one step closer to answering at least some of those questions. But the infrastructure around Zipcar has its own inefficiencies: It has built its fleet to handle peak demand, and so its cars, too, go unused a lot of the time. As a result, it tends to be more expensive than the true sharing-economy car rental startups.
Anyway, I don’t want to rent a car by the hour or by the day, whether it be a neighbor’s car or one from Zipcar. What I really would like is to be able to share a car along with other people in my neighborhood and find a way to finance, manage insurance, and manage booking in a single dashboard. I want to be able to subscribe to a service where I’m paying for access to get around when I want, with insurance (and maybe gas) built in. Where a car sits in a shared lot and is maintained by someone else.
Ultimately, I think this is where the auto industry is headed — or at least where it should go. At some point, U.S. auto manufacturers will likely find that people are buying fewer cars and hopefully holding onto them longer. That sharing economy companies are allowing those who previously owned a car to be able to go without. And when that happens, I think it will make more sense for automakers to set up their own Zipcar-like lots in major cities and to lease access to their vehicles rather than sell them outright.
Of course, the same efficiency model could be applied to other goods: Why should everyone in the suburbs buy their own lawnmowers when they could all use the same, jointly owned piece of equipment? Why build an Airbnb for boats when you can build a platform for fractional ownership of a boat? And the old standby — why Airbnb a vacation home to others when you could have ownership of it with a group of others?
We know that these models can work as long as we have the right tools to manage them. The question is, who’s going to build us this future based on fractional ownership?
It isn’t just us here in the UK who have to deal with higher prices for things then!
An anonymous reader writes “Live outside the U.S.? Tired of paying huge local price markups on technology products from vendors such as Apple, Microsoft and Adobe? Well, rest easy, the Australian Government is on the case. After months of stonewalling from the vendors, today the Australian Parliament issued subpoenas compelling the three vendors to appear in public and take questions regarding their price hikes on technology products sold in Australia. Finally, we may have some answers for why Adobe, for example, charges up to $1,400 more for the full version of Creative Suite 6 when sold outside the U.S.”
Editor’s note: Tareq Ismail is the UX lead at Maluuba, a personal assistant app for Android and Windows Phone that was a Battlefield participant at TechCrunch Disrupt SF 2012. Follow him on Twitter @tareqismail.
The release of Facebook’s Graph Search has raised much discussion among technology pundits and investors. One of the biggest questions surrounding the highly anticipated feature is its availability on mobile.
After all, Facebook CEO Mark Zuckerberg has said on a number of occasions that Facebook is a mobile company. “On mobile we are going to make a lot more money than on desktop,” he said at TechCrunch Disrupt SF 2012, adding “a lot more people have phones than computers, and mobile users are more likely to be daily active users.” Facebook understands mobile’s importance, so why wouldn’t it offer Graph Search for Android and iPhone from the start?
It’s simple: Graph Search for mobile would need to incorporate speech, which is a different beast altogether.
Many of the examples given during the Graph Search keynote contained long sentences, which are not easy to type on a mobile device. Think of the example “My college friends who like roller blading that live in Palo Alto.” Search engines like Google get around this on mobile by offering autofill suggestions, but their suggestions come from billions of queries. For Facebook, since their search is based on hundreds of individual values like “fencing” or “college friends” specific to each user and not a group, autofill suggestions will often not be useful, or worse, will require a lot of tapping and swiping to drill down to the full request.
What’s more is that Graph Search queries are designed to be written out naturally in full-form sentences with verbs, pronouns, etc., which is something that keyword search engines like Google do not need. If you’re looking for sushi places to eat on Google, it’s a five-character search for the keyword “sushi.” With Graph Search, Facebook wants to show you sushi results refined by a group of your friends, so the same search would require writing out “sushi restaurants my friends have been to” or “sushi restaurants my friends like.” That’s a lot more typing.
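To make the contrast concrete, here is a toy sketch (purely illustrative — not either company’s real API, and the index structure is an assumption) of the difference between a plain keyword lookup and a Graph Search-style query that also carries a social filter:

```python
# Toy contrast: a keyword engine only needs a term to look up,
# while a Graph Search-style query also encodes who should refine the results.

def keyword_search(index: dict, term: str) -> list:
    """Classic keyword lookup: term -> all matching places."""
    return index.get(term, [])

def graph_search(index: dict, term: str, visited_by: set) -> list:
    """Same lookup, then refined by a social filter (friends who've been there)."""
    return [place for place in index.get(term, [])
            if visited_by & set(place["visitors"])]

# Hypothetical data: two sushi restaurants and who has visited them.
index = {"sushi": [
    {"name": "Sushi A", "visitors": ["alice", "bob"]},
    {"name": "Sushi B", "visitors": ["carol"]},
]}

keyword_search(index, "sushi")                      # both restaurants
graph_search(index, "sushi", visited_by={"alice"})  # only the one alice visited
```

The extra `visited_by` argument is the part the user has to spell out in words (“…my friends have been to”), which is why the natural-language query is so much longer than the keyword one.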
It’s clear that on mobile, Graph Search would need to be powered by speech to make it most effective. No one will want to type out such long sentences. Not to mention, with services like Google Now and Siri, people will come to expect control through speech.
Supporting speech is a different problem altogether from what they’ve solved so far, and they’ll have to do a lot more work before it’s available on any major mobile platform. Here are four reasons why.
Speech Recognition Doesn’t Come Cheap
If time is money, then speech recognition is very expensive. It’s well-known that it requires a considerable amount of investment to develop and no one knows this better than Apple and Google.
Apple chose to not make their own speech recognition but rather license Nuance’s technology for Siri. Nuance has spent over 20 years perfecting their speech recognition; it’s not an easy task and they’ve had to acquire a number of companies along the way.
Google, on the other hand, chose to develop their own speech recognition and needed to build a clever system to collect data to catch up to Nuance. The system, called GOOG-411, set up a phone number where people could call in from landlines and feature phones to ask for local results. Once they got the data they needed, they shut down the service and used the recordings to build their recognition system. It has taken a company like Google, which has mastered search, over three years to get to where they are now with their speech recognition.
Even if it takes Facebook half as long to come up with a similarly clever solution, they’ll need to start soon for it to be released any time in the next year.
Names Are Facebook’s Strength And Speech Recognition’s Weakness
One of Facebook’s early successes has been names. The company’s algorithms to return the most relevant person when making a search for a friend played a key role in its early success. People are accustomed to saying “add me on Facebook” without the need to specify a username or handle, an advantage that makes their entry into speech that much harder.
Names are speech recognition’s biggest challenge. Speech recognition relies on having a dictionary or list of expected words that are paired to sample voice data given to the system. That’s why most engines do really well when recognizing common English words but have such a hard time with out-of-the-norm names and varying pronunciations. Facebook has hundreds of thousands of names to deal with and it’s a key part of their experience, so they’ll need to master the domain for it to be useful for their users. Now, one could argue that having access to all these names may give Facebook the edge to solving this problem, but they’ll need to work on a solution for some time for it to become anywhere near acceptable.
Supporting Natural Language Isn’t Easy
The final piece of the puzzle may be the most difficult: supporting natural language is really, really hard. Working at natural language processing company Maluuba, I can attest to just how hard a problem this is to solve. Natural language processing is the ability to understand and extract meaning from naturally formed sentences.
This also includes pairing sentences that have the same meaning but are said differently. For example, with Graph Search, I can type “friends that like sushi” and it shows a list of my friends who have identified sushi as an interest, but if I type “friends that like eating sushi” it looks for the interest “eating sushi” — which none of my friends have listed — and it returns zero results. In reality, both sentences mean the same thing but are worded differently. Understanding natural language involves understanding the real intent behind a request, not just its literal wording.
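A naive way to pair such paraphrases is to normalize queries before matching, for instance by dropping filler gerunds like “eating.” The sketch below is hypothetical (not Facebook’s implementation, and the filler-word list is an assumption); it collapses the two sushi queries to the same form:

```python
# Toy paraphrase normalization: strip filler activity verbs so that
# "like eating sushi" and "like sushi" match the same listed interest.
FILLER_VERBS = {"eating", "doing", "playing", "going"}

def normalize(query: str) -> str:
    """Lowercase the query and drop filler verbs before matching interests."""
    words = [w for w in query.lower().split() if w not in FILLER_VERBS]
    return " ".join(words)

q1 = normalize("friends that like sushi")
q2 = normalize("friends that like eating sushi")
# q1 and q2 now reduce to the same string and match the same interest.
```

Note how brittle this is: the same rule would mangle a genuine interest like “eating contests,” which illustrates why robust natural-language understanding needs large data sets and machine learning rather than hand-written rules.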
On a desktop browser, they may be able to get users to learn how to search in specific sentence templates, especially with the help of autofill suggestions. But for speech it’s nearly impossible. People ask for things differently almost every time; even the same person can ask for the same request in a different fashion when speaking. Ask 10 of your friends how they would search for nearby sushi restaurants. I have no doubt most, if not all, responses will be different from one another.
Now, they could fix the sushi example I gave earlier, but that may cause false positives in other parts of the system. Understanding natural language requires large data sets and complex machine learning to get right, something that Facebook’s Graph Search team may be investigating but will not be able to master any time soon. It’s just not a simple problem to solve. That’s why Apple jumped into a bidding war to buy Siri, which at its core is a natural language processor. To put into perspective how difficult it was, Siri spun out of a DARPA project that took over five years to build, with over 300 top researchers from the best universities in the country.
Languages, Languages, Languages
Facebook has over a billion users who collectively speak hundreds of different languages. Facebook has said they’re beginning their launch with English. How long until all billion users’ languages are supported for the desktop? And since speech is significantly harder, how long until those users are supported on mobile? It’s one thing to support hundreds of languages through text and a much harder thing to support it through speech. This will be the problem they face for the next decade.
Facebook acknowledges that their future lies in mobile. Mobile begs for Graph Search to be powered by speech, something that Facebook simply cannot do yet. I have no doubt they will but it most definitely won’t be to any acceptable quality anytime soon. They’ve taken the first step but they have a long journey ahead of them.
I’d been waiting for Forbes to bring out a Newsstand app and the wait has been worth it.
Loving the clipping feature as you can probably tell.
This particular article is great too - give it a read in the current issue.
Clipped from Forbes #clippings
Apple makes some really great software and hardware. We love it. But sometimes there are certain little things you want out of your computer that Apple can’t or won’t provide. That’s why we have jailbreaking and modding.
Ah excellent news! Just the component I wanted for my project! I wonder if you can add more than one? I want to add a couple at least…
The budget board makers over at the Raspberry Pi Foundation are clearly having a busy week, first launching the Model A in Europe, and now reporting that development of the camera add-on for the miniature computers has been completed. Well, the hardware has been finalized, at least, although it hasn’t been “tuned” quite yet (picture quality still needs improvement), and the drivers aren’t fully ready. The camera PCB measures around 25 x 20 x 9mm, and hosts a 5-megapixel, fixed-focus sensor that can shoot 2592 x 1944 stills and 1080p video at 30 fps. Aligning with the low cost of the main boards, it’ll set you back $25, but won’t be available for “at least a month.” Don’t just sit there twiddling your thumbs, though. Start brainstorming all the cool projects you can work on once you put an eye on that Pi.
Source: Raspberry Pi
Advertising is a wonderful challenge for second-screen: promotional mechanics in all their forms.