The Data Leak That Killed Google+ May Not Have Been a Leak at All

Original author: Russell Brandom
  • Translation


For months, Google had managed to stay clear of the growing backlash against the tech industry, but on October 8 the dam finally broke with news of a bug in the little-used Google+ network that could have exposed the personal information of half a million users. Google found and fixed the vulnerability back in March, around the same time the Cambridge Analytica story was gaining momentum. But once the news came out, the damage kept mounting: the consumer version of Google+ is shutting down, privacy lawmakers in Germany and the US are already exploring grounds for lawsuits, and former officials of the US Securities and Exchange Commission are openly debating whether Google did something wrong.

The vulnerability itself seems relatively minor. The problem came down to a specific developer API that could be used to access non-public profile information. Crucially, there is no evidence that anyone actually used it to get at personal data, and given the network's moribund user base, it is unclear how much personal data there even was to see. In theory, anyone could have requested access to the API, but only 432 people actually did (again, this is Google+), so it is plausible that none of them even thought to try.
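To make this class of bug concrete, here is a minimal, purely hypothetical Python sketch of the kind of over-exposure described above: an endpoint that is supposed to return only a profile's public fields but hands back the whole record. The field names, visibility model, and function names are illustrative assumptions, not Google's actual API.

```python
# Hypothetical illustration of an API over-exposing profile data.
# Field names and visibility rules are invented for this sketch;
# they are not taken from Google's actual People API.

PROFILE = {
    "display_name": "Jane Doe",    # user marked this field public
    "email": "jane@example.com",   # non-public field
    "occupation": "Engineer",      # non-public field
}

PUBLIC_FIELDS = {"display_name"}

def get_profile_buggy(profile: dict) -> dict:
    """Buggy handler: returns the full record, leaking non-public
    fields to any developer who calls the endpoint."""
    return dict(profile)

def get_profile_fixed(profile: dict) -> dict:
    """Fixed handler: filters the record down to the fields the
    user actually marked as public."""
    return {k: v for k, v in profile.items() if k in PUBLIC_FIELDS}

if __name__ == "__main__":
    print(get_profile_buggy(PROFILE))  # exposes email and occupation
    print(get_profile_fixed(PROFILE))  # only the public display name
```

The sketch captures the distinction the whole article turns on: the buggy handler creates the possibility of exposure, but whether anyone ever actually called it is a separate, empirical question.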

The bigger problem for Google was not the crime but the cover-up. The vulnerability was fixed in March, but the company sat on the news for another seven months, until The Wall Street Journal got hold of the story. Google clearly understood it had messed up (why else wipe a social network off the face of the earth?), but what exactly went wrong, and when, remains very murky, and that confusion points to deeper problems in how the tech world handles privacy failures like this one.

Part of the frustration comes from the fact that, legally speaking, Google is in the clear. There are plenty of laws requiring the disclosure of data breaches (most notably the GDPR, along with a patchwork of national laws), but by their standards, what happened to Google+ does not, strictly speaking, count as a breach. Those laws deal with unauthorized access to user information, and they encode a simple idea: if someone steals your credit card or your phone, you have the right to know about it. But Google only found that the data could have been available to developers, not that any data actually leaked. And with no evident trace of theft, the company was under no legal obligation to report anything. From the lawyers' point of view, this was not a breach, and it was enough to quietly fix the problem.

There are genuine arguments against disclosing every such bug, even if they look less convincing in hindsight. Every system has vulnerabilities, so the only sound security strategy is to search for them and fix them constantly. By that logic, the safest software is the software with the most bugs found and patched, however counterintuitive that may seem to an outsider. Forcing companies to report every flaw would get things backwards: the products that do the most to protect their users would take the most punishment.

Of course, Google itself has spent years springing other companies' bugs on them through its Project Zero program, which is one reason critics are so eager to call out the apparent hypocrisy. But as the Project Zero team would tell you, disclosing third-party bugs is a different matter entirely: that kind of disclosure is usually meant to push vendors into fixing errors and to build the reputations of the white-hat hackers who hunt bugs.

That logic fits software bugs better than social networks and personal data, but it is common currency in the cybersecurity world, and it is no exaggeration to say it shaped Google's thinking when the company decided to sweep the story under the rug.

But in the wake of Facebook's painful privacy debacle, the arguments from law and cybersecurity feel almost beside the point. The compact between tech companies and their users is as fragile as ever, and stories like this strain it further. The problem is not a leak of data but a leak of trust. Something went wrong, and no one at Google said so. If not for the WSJ's report, we might never have known about it at all. It is hard to avoid the uncomfortable rhetorical question: what else aren't they telling us?

It is too early to judge whether Google will face real blowback from this incident. The small number of affected users and the relative irrelevance of Google+ suggest it is unlikely. But even if this particular vulnerability was not critical, problems like it pose a real threat to users and to the companies they trust. The confusion over what to even call it (a bug, a leak, a breach) compounds a deeper uncertainty: what exactly companies owe their users when a privacy flaw turns out to be serious, and how much control we really have over our own data. These questions are critical in our technological era, and if the last few days have taught us anything, it is that the industry is still searching for the answers.
