Late-Stage Adversarial Interoperability (eff.org)
88 points by panarky on Dec 8, 2019 | hide | past | favorite | 34 comments


> But stories like Mint are rare today, thanks to a sustained, successful campaign by the companies that owe their own existence to adversarial interoperability to shut it down, lest someone do unto them as they had done unto the others.

This invites the question of whether there are any consumer-data-friendly financial services out there.

I'm a fan of both the FSF and Firefox, as both seem relatively less rapacious than the alternatives.

This was a great article.

Aside: https://eff.org/join is confusing. Can I join if I buy something?


The EU has a requirement for companies to share financial data about consumers if the consumer requests it. It's designed to support services like Mint.

https://www.cnbc.com/2017/12/25/psd2-europes-banks-brace-for...


And because this is a rule about something the banks have to enable, it gets to build in the security design you'd actually want as well. Since your bank has to help make this possible, it knows this is "Useful Spend Monitor Inc." calling the API on behalf of greglindahl and thus is allowed to see greglindahl's balance and recent transaction list but isn't allowed to create a new transfer authority for Mr A. Nigerian-Fraudster when they "somehow" get the credentials from Useful Spend Monitor Inc.

That's never going to happen for services built out of scraping Internet Explorer sessions or similar shenanigans.
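The scoped-access idea above can be sketched in a few lines. This is a hypothetical illustration, not a real bank API: the scope names, the `authorize` helper, and "Useful Spend Monitor Inc." are all made up for the example.

```python
# Hypothetical sketch of PSD2-style scoped API access: a read-only token
# lets a spend monitor see transactions, but can never create a transfer,
# even if the token is stolen. Scope names are illustrative.
READ_ONLY_SCOPES = {"accounts:read", "transactions:read"}

REQUIRED_SCOPE = {
    "check_balance": "accounts:read",
    "list_transactions": "transactions:read",
    "create_transfer": "payments:write",
}

def authorize(granted_scopes, requested_action):
    """Allow an action only if the token's granted scopes cover it."""
    return REQUIRED_SCOPE[requested_action] in granted_scopes

# "Useful Spend Monitor Inc." holds a read-only token:
assert authorize(READ_ONLY_SCOPES, "list_transactions")    # allowed
assert not authorize(READ_ONLY_SCOPES, "create_transfer")  # can't move money
```

The point is that the permission boundary lives on the bank's side, not in whatever the third party promises to do with full credentials.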


The downside of such adversarial interoperability is that it reduces security, because the scraping systems need the user's credentials to log in. I'm not going to give any third party my banking credentials, and I have 2FA turned on for every site as well.

I suppose if the scraping is done by the user's browser via, say, a timed service worker or something, and the credentials never leave the user's browser, it would be OK, but the idea of a bank of headless (IE!) browsers with a database of millions of users' bank credentials sends a shudder down my spine.


Similarly, reCAPTCHA prevents this as well. reCAPTCHA primarily checks, using botguard, that it's running in a real browser on the correct website. Then it might give you the CAPTCHA challenge to prove you are human. It's not possible to solve the CAPTCHA challenge without first passing botguard.

A good example of this is Discord. Alternative clients aren't very popular since you need to pass reCAPTCHA to log in. The currently available clients require you to copy cookies out of your browser.


The Discord plugin for Pidgin doesn't require copying cookies:

https://github.com/EionRobb/purple-discord/


Scraping should be done client side. Like an automated browser. That way credentials never have to leave the user's computer or be shared with third parties.


That's still dangerous and a retrograde step security-wise. There should be no third party code that is running on your internet banking session, including browser extensions.


The response to Cambridge Analytica set out a clear social contract: to allow users to share their data with a 3rd party is to take responsibility for what the 3rd party does with it. Users cannot be trusted to read consent screens or make their own decisions about authorization. And even if they can, they don't have the consent of others involved in the data (whether friends on social media, or counterparties to bank transactions).

Companies taking steps to block interoperability is an outcome of the righteous campaign that this community fought and won. Comments here should be celebrating.


But was Cambridge Analytica really adversarial? If they used the data without the knowledge or specific consent of FB, the latter should not be blamed in principle. (And that was not the case.) Although I'm pretty against FB's practices and business model, I have to admit that the mainstream backlash against it has been largely "politically" motivated (as in, more partisanship than ideology). So FB would likely be bashed for this specific reason anyway.

I'm open to the idea that there should be a way to interoperate without the consent or responsibility of the primary operator, and with no way for them to prevent it. Though I don't know about the oughts, nowadays the desire to integrate and annotate your finances seems quaint in light of possible data weaponization and profiling. (And institutions are trying to do it whether you like it or not; my bank does try to sort my transactions "intelligently", no thanks.) I'd say I would want to do it at home on paper if at all, but people probably[0] should be able to do these things and others if they want.

[0] Unless there is a legitimate case to be made that they endanger freedom of others.


Facebook published an API for third party apps to access data with user consent, and Cambridge Analytica used it. The outrage indicates a belief that there should not have been such an API. Or at least that users were not competent to consent, and that Facebook should have been controlling which third parties they could share with.


We care about interoperability with our software. We should be able to download the data and process it ourselves on our computers without having to trust random companies.

Needless to say, they hate it when we do this. It's totally fine when they run javascript on our computers to collect personal information but when we run our scripts against their site suddenly it's abuse.


The lesson of CA is that if you can download the data then you can share it with a third party, and when the third party abuses it, everyone is going to hold the guy with the Download button responsible for that.


Facebook was selling other people's data. This is about accessing your own data. Both are about having control over your data. There is no contradiction.


There was no sale. Cambridge Analytica used the free, public API just like any other FB Platform developer, including 15-year-old me. Users clicked through OAuth consent screens to share their data with its personality test app. It being a social network, much of that data involved their friends.

Bank transactions are not "your own data" any more than what CA scraped from personality quiz users was "their own data." In both cases, there are people on the other side of each record.


There's definitely a quantitative difference between the amount of other individuals' personal data in your bank records vs in your social network account.


"Adversarial interoperability" is what happens when you decide to abandon consent as a governing principle for interoperating, because it prevents progress. People complain when unicorns do this.

Even if you had consent in principle, you'd still have to agree to terms for interoperating. In particular, what price and who pays? Or does neither side pay? Consider ISP's and peering agreements.

Even some open source projects will grumble if Amazon uses their software following the terms that are right there in the license agreement. There seems to be an implicit "shadow agreement" that if you use their software to make money, you should give something back. (But that's not what the license says.)


> "Adversarial interoperability" is what happens when you decide to abandon consent as a governing principle for interoperating, because it prevents progress.

Whose consent is needed?

Open source projects who grumble about Amazon created a model where their consent was not required. This is where the ideals of open source don't meet the desires of the VCs paying for the work. It's possible to have a stable business supporting and serving open source, but it is hard to do something that meets VC returns on investment with open source.

The problem is in the financial model, and whose consent that model requires.

When it comes to finances things are a little different because the business model is different and so are the regulations. It's also not about open source but about data control.

There is the whole idea of interoperating, too. Financial organizations often see no benefit to their income from interoperating with their competitors.

It might be useful to look at how competitors can come together to collaborate in open source in a way that helps raise all those involved.


Well, typically you need consent from the service being used. If you look at how the "robots.txt" standard works, consent is assumed even for basic web crawling by search engines. (At least for legitimate businesses; many crawlers do ignore robots.txt.)


That document should not be considered legally binding. It is not a contract.

Robots.txt is not even really a standard; at best it was a draft, set to expire on January 2nd the following year.

The purpose of the file was to preserve performance, not to hide anything. It targets crawlers not API calls.

Getting consent from any business entity larger than a single person takes months at the very least, if it ever happens. It's easier for them to deny (lawyers included) and intimidate than to compete and handle support tickets. Welcome to capitalism.


In practice, the crawlers that read robots.txt don't do adversarial interoperability. If a website doesn't want to be in Google's index, they can opt out, and no lawyers are needed to do that.
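For what it's worth, the opt-out mechanism described above is simple enough that Python ships a parser for it in the standard library. A minimal offline sketch (the rules and URLs are made up for illustration):

```python
# Sketch of robots.txt opt-out using Python's stdlib parser.
# A polite crawler checks can_fetch() before requesting a URL.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("GoodBot", "https://example.com/private/report"))  # False
print(rp.can_fetch("GoodBot", "https://example.com/public/page"))     # True
```

Nothing enforces this, of course; honoring the file is purely a convention followed by well-behaved crawlers.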


I don't understand why there aren't popular client-side tools that do this sort of thing. Not only is that hugely better for security (no third party sees your credentials or your data), it's much harder to distinguish from automated centralized scraping.

Mint apparently was able to get some users to install browser extensions. I believe RECAP and Sci-Hub work this way too. So why do trusted-third-party scrapers remain popular?


Users hate installing software, and investors are used to immediate, up-to-the-minute metrics.


I think hating installing software is actually pretty prudent these days. Look at how mobile app publishers act towards personal data; now consider that desktop software has almost no restriction on what it can do with a user's files. We're basically stuck just hoping that any app we install respects our privacy.


You could structure it as a browser extension - it'd have the ability to run on certain sites (or sites you click the icon on, or whatever) but not all sites and definitely not all your local data.

But also, I was thinking mobile apps actually make sense here. They are actually isolated from each other. Some publishers may not be respectful of data you explicitly share with them, but you still have to explicitly share it. I'm a lot more comfortable giving my bank password to a mobile app (preferably an open-source one) that runs locally than to a website or mobile app that sends my credentials remotely.
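The site-scoped extension idea can be expressed directly in an extension manifest. This is an illustrative config fragment only; the extension name and the bank domain are made up:

```json
{
  "manifest_version": 3,
  "name": "Local Spend Monitor (hypothetical)",
  "version": "0.1",
  "host_permissions": ["https://*.examplebank.com/*"],
  "content_scripts": [
    {
      "matches": ["https://*.examplebank.com/*"],
      "js": ["scrape.js"]
    }
  ]
}
```

The browser enforces `host_permissions` at install time, so the extension simply cannot read pages outside the bank's domain, and the credentials stay in the user's own session.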


The problem is the alternative is giving your bank credentials and other sensitive personal information directly to a third party.

At least when the software is on your computer it's possible to analyze it to see what it's doing with your information. Once it leaves your computer into the possession of a third party, it's no longer even within your power to observe what they're doing with it.


I'm not sure I agree. On the one hand, once you provide data to a web service, that piece of data is permanently out of your control. On the other hand, once you're running untrusted code on your computer, all of your data is potentially permanently out of your control if you ever reconnect to the internet.

As for analyzing software, even for software devs that's impractical. It's not enough to run it under some kind of process spy to see what it's doing because the malicious behaviour might be time limited or otherwise triggered unpredictably, so you really need to decompile and reverse engineer the program to figure out what files it can read and what it does with them. Not gonna happen. A better option is to spin up a fresh VM to run untrusted code on, but even that's cumbersome.


Decompiling the program is arduous but it is possible. You only need one person to do it and expose the nefarious activity to destroy their reputation, which acts as a deterrent.

Another option is to restrict its network activity. If it can't communicate with the developer then it can't send your information to them.

You could also merely log its network activity, maybe using some hooks into any crypto library it links against so you can see the plaintext. That wouldn't prevent it from doing something bad, but it would at least allow you to detect it after the fact and then everyone would know to stop trusting that developer, and you get the deterrent without investing as much time and effort.


The only example of such a service I remember ever using is a German/European payment provider that offers immediate yet almost-free bank transfers.

That service cannot run in the browser because its specific purpose is to run the transaction itself. Not trusting the user is basically the reason it exists.


It seems like waaay back in the day things like Quicken and MS Money could download updates automatically from your bank, or at least banks offered OFX-format downloads that those tools could import.
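Those OFX statement downloads are simple enough that a rough sketch of parsing one fits in a few lines. The field tags (`TRNAMT`, `DTPOSTED`, `MEMO`) come from the OFX spec, but the sample data and the regex-based approach are illustrative; real OFX 1.x files are SGML-like and messier than this:

```python
# Minimal sketch: pulling transactions out of an OFX 1.x (SGML-style)
# bank statement download. Sample data is made up.
import re

ofx = """
<STMTTRN>
<TRNTYPE>DEBIT
<DTPOSTED>20191205
<TRNAMT>-42.17
<MEMO>COFFEE SHOP
</STMTTRN>
<STMTTRN>
<TRNTYPE>CREDIT
<DTPOSTED>20191206
<TRNAMT>1500.00
<MEMO>PAYROLL
</STMTTRN>
"""

def parse_ofx_transactions(text):
    """Extract each <STMTTRN> block into a dict of its fields."""
    txns = []
    for block in re.findall(r"<STMTTRN>(.*?)</STMTTRN>", text, re.S):
        fields = dict(re.findall(r"<(\w+)>([^\r\n<]+)", block))
        txns.append({"date": fields["DTPOSTED"],
                     "amount": float(fields["TRNAMT"]),
                     "memo": fields["MEMO"]})
    return txns

for txn in parse_ofx_transactions(ofx):
    print(txn)
```

Because the bank publishes the file format itself, no credentials or scraping middlemen are involved: the user downloads the statement and the tool imports it locally.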


> Imagine if an adversarial interoperator were to enter the market today with a tool that auto-piloted its users through the big tax-prep companies' sites to get them to Free File tools that would actually work for them (as opposed to tricking them into expensive upgrades, often by letting them get all the way to the end of the process before revealing that something about the user's tax situation makes them ineligible for that specific Free File product).

Sounds like https://simpletax.ca


Canadian taxes were/are simplified, APIs exist to login and fetch T4s and other services, and while it could be simplified further (particularly for corporate taxes and provincial incorporation, PEPPOL E-Invoicing and open banking legislation, etc.), the current state of things in Canada is pretty straightforward, including a relatively open program so developers can write and validate their independent tax software with the CRA. By comparison, when similar simplifications were attempted in the US, they generally failed due to lobbying efforts from Intuit and others. And my understanding is that in Canada, provinces try to harmonize more with the CRA, with the exception of Quebec primarily, while in the US states create many different tax situations, in part because there are more U.S. states than Canadian provinces.


this is somewhat off topic... but can you imagine how much money mint's data is worth? I once read this story about a guy who worked in the fraud department for citibank(?) and he had access to all the transactions. He used that live data to see that purchases at Chipotle(?) were down X percent and shorted the stock before they released their quarterly earnings report. He got thrown in jail and I don't remember exactly why. He wasn't really entitled to that data.

However, Mint/Intuit certainly obtains that data legally. They could make trades on that information, couldn't they? That is essentially like printing money. Maybe they already do and it is just a secret? Idk.


papergov.com -> adversarial interop for government websites :)




