
Online rating systems are overrated. Most of them are flawed, and they become less and less important as trust in the brands of platforms like Airbnb and Uber grows.

When Brian Chesky and Joe Gebbia came up with the idea in 2007 of an online platform on which anyone could rent out a room to a complete stranger, they drew many odd looks. How could you know an overnight guest can be trusted? Since then, over 100 million nights have been booked through the platform in 191 countries. I’m talking about Airbnb. The secret: online trust through rating systems.

Sharing rooms and houses has existed since time began, but thanks to Airbnb the threshold has been lowered and the activity is growing exponentially. A growing number of online platforms link up individuals using the same principles. Uber connects drivers with people who want to travel from A to B, and the Dutch SnappCar connects car owners with people who need a car for just a short time. In the Netherlands, over 140 platforms already actively mediate between individuals. Every platform’s promise: carefree pleasure. Trust is the key.

Trust through online platforms is created in two different ways:

  1. Interpersonal Trust: the trust among users. This is created by personal profiles filled with information about yourself and by reviews that others write about you. When you rent a room through Airbnb, you rate the host and the host rates you afterwards. The more transactions, the better the picture of a user’s trustworthiness;
  2. Institutional Trust: the trust users have in the system. Platform builders do all they can to prevent problems and to solve them as soon as they occur. Preventive measures include profile scans and automated processes such as credit checks to keep out people with bad intentions. Reactive measures include proper insurance and an easily reachable customer service. Together these create trust in the brand of the platform.

It is about trust in the platform as well as trust in other users. Rating systems (reviews, stars) serve several purposes: they ensure bad apples are quickly removed from the system, they let people be chosen according to their proven quality, and they enforce desired behavior. The moment you misbehave (or do not behave according to the applicable or desired standard), you are slowly pushed out and end up on the sidelines.

Online ratings in practice

How does online rating work in practice? Most ratings are based on a five-star system. In addition, most platforms offer the possibility to explain your rating in a text box.

When you look at the average number of stars on Uber, it appears that the differences between good, mediocre, and bad drivers aren’t that big at all.

The impact of a single low rating can be big if you don’t have many ratings yet, and one low rating lowers the chance that someone will choose you next time. Is the current system ideal?
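
To make that concrete, here is a minimal sketch (my own illustration, not any platform's actual formula) of how far a single 1-star rating drags down a plain average, depending on how many ratings a driver already has:

```python
# How much does one 1-star rating move a simple average,
# given a history of n perfect five-star rides?

def average_after_one_bad_rating(n_five_star_rides: int) -> float:
    """Average of n five-star ratings plus a single 1-star rating."""
    return (5 * n_five_star_rides + 1) / (n_five_star_rides + 1)

for n in (5, 20, 100, 500):
    print(f"{n:>3} rides: {average_after_one_bad_rating(n):.2f}")

# Output:
#   5 rides: 4.33   <- one bad day and a newcomer looks mediocre
#  20 rides: 4.81
# 100 rides: 4.96
# 500 rides: 4.99   <- the same bad day barely registers for a veteran
```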

Let’s take a ride in an Uber taxi as an example. That particular day you got up on the wrong side of the bed, the taxi ends up in heavy traffic, and you miss an important business meeting. Chances are you will not give this driver a great rating. In this example, the rating is colored by context that is beyond the driver’s control.

Another example: a carpenter offers his services on Werkspot.nl. In his first year, as a beginner, he does not have the experience of an old hand in the trade. Over the years he develops into an expert, yet negative reviews from his first year weigh on his reputation forever. Such a system doesn’t account for any learning curve.

Reputation and ratings 2.0

A party that, in my opinion, has thought this through really well is the Dutch MeetingReview, a platform for rating event venues. They improved online rating in four ways:

  1. Linear depreciation of reviews: a review’s weight declines over four years. After one year it counts for 75%, after two years for 50%, after three years for 25%, and after four years for 0%. This ensures that a mistake isn’t counted against you forever and that your most recent performance contributes the most to your overall score (a rough sketch of this follows after the list);
  2. Possibility to follow up on a negative review: in the Uber example, you can imagine a client calming down and realizing that the rating he gave was too low. At MeetingReview you are allowed to change your rating in hindsight;
  3. The feedback loop: as the party being reviewed, you can enter into a conversation with the reviewer to find a reasonable solution. Not only future clients benefit, but also the current user of the system. This feedback loop relies on the good intentions of the person being reviewed and the assumption that he wants to learn from his mistakes;
  4. Manual checks: with a scoring system ranging from 1 to 10, all outlying scores below 5 or above 8 are checked manually. This prevents good friends or competitors from unjustly skewing the average score.
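
As a rough illustration, the linear depreciation from point 1 could be implemented as follows. The weighting steps match MeetingReview's description above, but the function names and the use of a simple weighted average are my own assumptions, not their actual code:

```python
from datetime import date

def review_weight(review_date: date, today: date) -> float:
    """Linearly depreciate a review from 100% to 0% over four years."""
    age_in_years = (today - review_date).days / 365.25
    return max(0.0, 1.0 - age_in_years / 4.0)  # 1 yr -> 0.75, 2 yrs -> 0.50, 4 yrs -> 0.00

def weighted_score(reviews: list[tuple[date, float]], today: date) -> float:
    """Time-weighted average of (date, score) pairs; recent reviews count most."""
    weights = [review_weight(d, today) for d, _ in reviews]
    total_weight = sum(weights)
    if total_weight == 0:
        return 0.0  # nothing recent enough to count
    return sum(w * score for w, (_, score) in zip(weights, reviews)) / total_weight

# A weak score from three years ago barely dents a recent string of 9s:
today = date(2016, 6, 1)
history = [(date(2013, 6, 1), 3.0), (date(2016, 3, 1), 9.0), (date(2016, 5, 1), 9.0)]
print(round(weighted_score(history, today), 1))  # ~8.3, versus a plain average of 7.0
```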

Such a smart rating system is still the exception. Most platforms work with the simplest ratings, even though the choices made on the basis of that output can be drastic.

Added value of interpersonal trust in the future

First of all, algorithms increasingly make the first selection from the supply. With Uber, the algorithm matches you with a taxi; only afterwards can you see the driver’s rating. Since this car is your fastest option, you’re not likely to cancel the ride. On other platforms we see automated matching appear as well. On Airbnb you can already book many homes directly, i.e. without the explicit confirmation of the host.

What we see happening is a shift from interpersonal to institutional trust. Ultimately, you trust the platform to have all its scans and checks up and running, and, moreover, to fix things if something goes wrong. The role of rating systems will move further into the background and end up as no more than a control mechanism.

Conclusion

Interpersonal trust played a big role when institutional trust was not yet developed. In 2007 nobody had heard of Airbnb, there was no trust in the brand, and people were being asked to do something they had never done before. Interpersonal trust through online profiles played an important, though temporary, role. The more often we use these kinds of platforms, the greater the trust in the institutions behind them.

Be honest: when was the last time you asked your favorite airline for the online rating of the pilot of your plane? Exactly.

This post was originally published on the website of Intrapreneur.nl, a knowledge platform of Trivento.

——————————-

Question from @RenseC on Twitter: “Doesn’t the fact that eBay reputations are still influential contradict your theory?”

Answer:

Yes and no. I think a certain balance between interpersonal and institutional trust will always remain. The question is the proportion of each, and that is determined by several factors:

1. The trust and reputation the platform has developed. This includes elements such as trust in the brand, position relative to the competition, how user screening works, whether quality checks are in place, what measures are taken when things go wrong, and which safeguards (e.g. insurance) the platform offers. For example, if you know that Airbnb excludes everyone with a score under 90% from the platform, and that profiles are well filled in, evaluated, and checked, the need to look further into the reviews is smaller;

2. The core of the product or service. Is the product or service standardized? Take a taxi ride through Uber: certain quality standards provide assurance (car type, certificate of good conduct, etc.), and the desired behavior (response time, acceptance rate, etc.) is cleverly steered through the app. So it doesn’t matter whether driver A or driver B provides the ride: you want to go from A to B, and that’s why you rely on the (brand) promises of the platform. Interesting side note: maybe the reputation of the demand side will become more important in the future. I’ve seen several Uber forum discussions about whether you should pick up a client with a score of 4.3, because it’s guaranteed trouble;

3. Whether the transaction involves physical contact. With a physical meeting (especially outside the personal living space) you have more input for trust, and the other party has a harder time not keeping his promises;

4. Whether the transaction is a regular one (a taxi ride) or a once-in-a-lifetime one (the purchase of some product on eBay);

5. Whether an algorithm makes the match. In other words: can you choose from the full supply, or does an algorithm provide the match, with the profile only serving to possibly refuse the choice made for you;

6. Choice: if there is not much to choose from (like traveling with BlaBlaCar from Utrecht to Brussels around noon tomorrow), you’ll be less picky;

7. Urgency: do you need something fast, or can it wait a bit;

8. What do you stand to lose when something goes wrong;

9. Probably, I’ll think of some more reasons later on 😉

Conclusion: I think the proportion of interpersonal versus institutional trust differs per platform and per service or product. In my opinion, strong institutional trust, familiarity with the service, low uncertainty, an automated or preselected match, and urgency all lower the amount of interpersonal trust that is required.

To answer Rense’s question about eBay concretely: on eBay you may trust the institution in how it handles its processes, but there are many uncertainties, e.g. not being able to check the quality of the product, or not knowing the product too well yourself. With an Uber ride things are organized much more tightly, because the focus is on only one thing: a car driving you from A to B while providing a good user experience. Actually, I’m curious whether the results differ between people who pick up their products in person and those who have them shipped.

——————————-

What importance should we assign to online reputation? More and more people use online marketplaces to trade with strangers. You can rent out your house on Airbnb, ride with a private driver through Uber, or invest in a cool gadget in a Kickstarter campaign. It’s quite exciting, since everyone is basically anonymous on the internet, so you’re in the dark about whether you can trust the other person in advance. Over time, these marketplaces have developed countless ways to generate trust.

Control Systems

Platforms ask for ID checks, Facebook connects, and links to other trusted sources. And this is only the beginning. After every transaction, both the demand and the supply side are asked to rate each other. How did you enjoy your stay in this home? How tidy was the guest? Usually this is captured in a short descriptive review or a star rating.

Apparently the platforms do not consider these ratings and checks sufficient yet. Uber recently announced it will start using GPS data from drivers’ cell phones to monitor their driving skills and intervene if needed. Kroodle, an initiative of Dutch insurer Aegon, launched a very similar product: the calmer you drive, the higher the discount on your car insurance at the end of the month. This is the first tug pulling the rug from under the collectivity principle on which insurance is based. For now it is only a discount, but we can assume it is just the beginning.

Further and Beyond

Reputation and rating systems will be used ever more extensively. They are applied in a wider scope, and databases are being linked: data from apps, but also connections to bank accounts, profiles, and friends’ behavior. The more information gathered, the sharper the user profile. With such a personal profile, bad behavior can be punished with exclusion and good behavior rewarded with greater access and privileges. Misconduct is exposed, desired behavior is encouraged, and being excluded even once is simply tough luck. All of this becomes even easier with automated algorithms.

The systems in use today are still in an early phase. With a score of 4 out of 5 stars you might think you’re heading in the right direction; on Uber, however, anything below 4.6 stars already puts you in the penalty box, even though the driver has no complete control over the customer’s experience. Bad weather, traffic jams, accidents, or problems with the app directly influence the final grade. Besides, the frugal Dutch will probably give a lower average than the jovial Americans. And if a customer regrets a low rating the next day, there is no way to correct it.
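
To put that 4.6 threshold in perspective, here is a quick back-of-the-envelope check (my own illustration; Uber's actual deactivation rules are more involved than a plain average):

```python
# Suppose a driver only ever receives 4- and 5-star ratings.
# With p the share of 5-star ratings, the average is 5*p + 4*(1-p) = 4 + p,
# so staying at or above the cutoff requires p >= cutoff - 4.
cutoff = 4.6
min_share_of_five_stars = cutoff - 4
print(f"{min_share_of_five_stars:.0%}")  # 60%: even a '4 = good' rating is a liability
```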

Insanity

The better and more complete the online reputation, the greater an Uber driver’s chances of finding work. This reputation is composed of ratings of small assignments: an Uber driver is rated about twenty times a day. Of course everyone will try to maximize the customer experience, but ask yourself what effect this has on the person himself. The constant push to please, avoiding every little mistake, and adapting your behavior to the standard Uber tries to establish leaves no room to experiment. Deviant behavior will most probably be punished, resulting in exclusion. I don’t know about you, but that would drive me totally insane.

Who determines what is ‘right’?

Earlier I indicated that these systems aren’t perfect. Not only the techniques, but also the interpretation of what is ‘good’ and what is ‘bad’ is at stake, and that interpretation depends on the situation. When I take an Uber ride with my kids, I want the driver to drive calmly; when I’m rushing to catch my flight, I’d be pleased with a racier driving style. In both cases the GPS algorithms are likely to punish the driver, as he would be either far too slow or far too fast, even though I would be a perfectly content customer. The same dilemma arises when hiring a cleaner through a mediation platform like Helpling. What makes a cleaner good? One person values punctuality, another cleaning skills, and a third a friendly chat.

How important is trust?

Platforms claim that creating and monitoring trust is one of their main added values. The question arises: isn’t the importance of online reputation exaggerated? Ratings are based on rational data, while human beings don’t make rational decisions. I buy a house for more than the asking price, mainly based on gut feeling, yet leave my most precious possession, my kids, at home with a relative stranger when going out for dinner with my girlfriend at night. The platforms make us believe that online trust is very important; see this short fragment.

They have an incentive as well: the fact that you won’t trust a stranger right away, with distrust as the default, underpins their very reason for existence. What would happen if we switched the default to ‘trust’?

It is important to ask ourselves this question. As mentioned before, reputation systems will be used for many more applications. These may be good, but they may also be bad or have questionable objectives.

Sesame Credit

Reputation data helps entrepreneurs without a credit history in less developed countries to obtain loans. But things may turn. Considering the possibilities, one could end up in the scenarios described in the book The Circle by Dave Eggers. Reality has caught up with the writer’s imagination, it seems. Late last December it was announced that China is developing a national reputation score, Sesame Credit. Drenched in clever gamification techniques, the score is still optional, but it will be mandatory for the entire population from 2020. The Independent unraveled the system, stating: “The system measures how obediently citizens follow the party line, pulling data from social networks and online purchase histories.” (China has made obedience to the state a game) This brings a scenario depicted in The Circle within reach.

Designing the future

Consequently, the time has come to think about future reputation systems. Anyone who owns reputation data is sitting on a goldmine and even holds the glue to bind platforms together. The interpretation of that data will therefore never be fully objective, but always tilted toward the (revenue) model of the platform.

Everyone has to be able to learn from their own mistakes. The current systems do not allow for this, so we will have to think about creating a system that takes long-term human development into account: a system with room for mistakes, where you have the right to experiment, and where shortcomings can be forgotten. Otherwise we’ll end up in a situation in which deviant behavior, which through the ages has been a source of new inventions and has brought growth to human existence, is hampered.

 
