Broader context: iView (from the ABC in Australia, a publicly funded broadcaster) was pretty much first to market here for streaming TV and video on demand.
The other stations have all since caught up, but ABC have a tremendous amount of quality children's content so it's a very popular service with families.
However, the current government is not a fan of funding the ABC, and as such they've been operating on a tight and shrinking budget for most of the last decade. The edges of their products, including interactive news pieces and election/pandemic coverage (and some of these API key issues), are a bit rough, but the overall achievement is excellent.
Great summary. One last recent detail relevant to HN is that despite being a public broadcaster, they've recently started forcing everyone to make an account to watch things, in order to track viewers more closely & get better data. The BBC also does this, I think, not sure when they started. It's a good question -- should you have a right to view content from the public broadcaster without making an account?
I'd be careful about posting stuff like this as a young person in Australia. The modern situation is incredibly hostile towards this sort of disclosure. Especially regarding a government entity.
It's not that you've done anything in the slightest bit wrong. It's that others with power can easily make it become wrong, with little to no backlash, in the current Australian climate.
I understand the desire for recognition, but certainly think twice and at the very least wait until after election season is over in June so tech-illiterate political opportunists don't pounce to martyr you for their own gain.
Would suggest looking into the fall of someone widely recognised like Dr Vanessa Teague, a professor who pointed out government failures in e-voting and flaws in supposedly anonymised health data, to make up your own mind. One government department finally had enough and she was out; I'm sure that's a lot cheaper than actually fixing the problems raised.
Behind the laid-back "beers and beaches" mirage, Australia is an authoritarian country with huge public support for iron fists; this is clear to many.
I booked a hotel stay (in Canada, not Australia) and got an error page at some point that dumped out all env vars including database credentials.
Tried my best to report (not publicly disclose) it, including asking the front desk for contact information for IT; no response.
I think we're (on HN) often in quite a bubble of being (or striving to be) hot on this sort of thing, or frankly on far trickier-to-exploit things, when really the bar for a lot of the ('IT is a cost centre') stuff out there is extremely low.
I don't think this sort of leak or vulnerability is anywhere near as rare as it seems (and it doesn't even seem that rare) - I think an awful lot must just get quietly exploited or go unnoticed. We're only hearing about this one because someone thought it was 'lolz'. I didn't publicize the one I noticed (in my normal user behaviour of just trying to book a room!), nor did I see if I could connect to the database and book myself in for free or something. And I only noticed it because the site a) experienced an error; b) dumped env vars in the event of an error - i.e. I didn't have to look for it. How many other sites have I used since with similar problems but which just didn't happen to serve it up on a silver platter for me?
Leaks are everywhere. I went to a certain country and needed to register my phone, somehow ended up in a workflow that allowed me to enter any national registration number (similar to a social security number) and it would output the person's name, phone number, address and other details for me to confirm that that was me :)
No rate limiting on the endpoint, doesn't require auth, didn't block my VPN, doesn't even set cookies (very privacy conscious devs apparently). I could have mined the entire country's data. Insane.
Hardly matters. If an endpoint is leaking anything as sensitive as national ID + name + address, a determined attacker will have no problem with scraping it slowly or using a network of proxies to avoid rate limits.
My favorite is just having the console open while browsing the web. It's amazing the amount of information devs "forget" to remove from console output in production. A lot of console vomit comes from JS frameworks. I don't know if there's a switch that can tell them to shut up in production or not, but it's one thing I look out for in anything I work on.
Most C-level people couldn't care less about security. I've yet to work for a SaaS that implements 2FA, yet at some point all of them have had passwords that a script kiddie could brute-force within an hour. The only time I've seen security become a top-level priority was when some customer demanded some kind of checkbox compliance like SOC / ISO27001.
You call it checkbox compliance, but it does at least make vendors think about pretty much the bare minimum in all-around security posture, and then have an external audit - as someone who has to approve SaaS vendors it makes my life much easier.
Thankfully, I already disclosed the issue to iView's engineering team back in December 2021, and a lot of the original data has since been removed from the site. I've tried to mitigate the risk by censoring a lot of the sensitive details in the write-up, but I'm not sure if that is enough.
But why is the Australian government so "police state" minded?
Is that really what the Australian people want? I'd guess they just don't care either way, but in that case why would the Australian politicians push for that? Canada has a pretty similar apathy towards politics, but even then we don't see the government forcing Canadian citizens to implement backdoors or raiding the offices of a broadcaster. (Yes, the recent C-36 and C-10 bills are pretty disastrous in that regard, but they are very recent and face quite a bit of opposition.)
It's such a peaceful country too, so the entire "hard on crime" policy does not make sense to me. Is it a partisan issue in Australia or is it something both parties agree on?
In simple terms, Australia is a relatively young country that formed its own government in 1901. It was also isolated from the rest of the world, with a harsh environment and a lot of things that can kill you. This produced an overall culture of helping each other when you can (what gets called “mateship”), and trust in the government to help when it is needed. Australians generally like an orderly society, and that translates into a publicly approved police state.
Slight nitpick: Australian colonial (State) governments existed well before the Constitution of the Commonwealth in 1901, at least since 1788.
I'm not so sure about the relevance of the so-called "harsh environment" to political culture. Although, it may be said Australians have a more deferential view of certain aspects of politics than other Western countries.
I’m well aware of the predecessor colonial governments pre-Federation. My point is that those were symbols and apparatus of British imposed rule, and it is relatively recent that the notion of an “Australian government”, as its own national sovereignty, came about. Specifically since the change in national identity had a direct impact on the government no longer being looked at as “the ruling class”.
Don't forget our geography either! We're in fairly close proximity, physically and politically, with authoritarian Asian states like Singapore, Malaysia and China.
> Is it a partisan issue in Australia or is it something both parties agree on?
They've been boiling the frog since 9/11. All anybody has to say is "national security" and the opposition party pretty much folds. Imagine the US without the Bill of Rights - that's the landscape.
Also in the US, where you can be sentenced to 41 months in prison for browsing a public URL at AT&T, and where the Governor of Missouri wants to make it illegal to view the HTML source of a web page (because some state web site leaked all the SSNs of their teachers in some hidden HTML or something).
If I found something like this on a site I don't think I would notify anyone. Too risky. Maybe over TOR if they have a contact page or something. But it is hard to be anonymous these days.
weev didn’t just “browse a public url at AT&T”. That is dishonestly reductionist. He noticed the bug and then used it to retrieve and make public the private data of over a hundred thousand people.
My understanding was they just sent random ICC-ID codes as a query parameter to a public URL, and if you hit a valid ID it returned an email address. It was working as "designed", and not a bug.
So? You access a few accounts that way, realize you're accessing people's private info, and then stop. No one is gonna charge you for that. Write a script and download a hundred thousand people's private data, that's a whole different thing.
Quite a few countries have laws from the 1980s that basically say "gaining unauthorised access to computer systems is a crime"
Which is of course a very expansive definition. Think you've found a leaked database credential and you test it before reporting, so as not to create a false alarm? That's illegal hacking. Almost any persistent XSS? That's illegal hacking. Access an admin panel by entering a default password? You guessed it, illegal hacking.
We might get the impression these laws don't exist, because they aren't enforced internationally or if the hacker can't be identified - so black-hat hacking, cryptolockers, tech support scams, giant data breaches and suchlike go completely unpunished. But a white-hat hacker who identifies themselves in hopes of getting their security report taken seriously might well get a visit from the cops.
In Australia the go-to for dropping a legal hammer on a digital crime is "misuse of a carriage service", which is just a big lasso that puts crimes like fraud that happen on the internet into a simple basket, so sentences can be attached as they see fit.
> If you randomly try my front door and find that it's unlocked, don't expect me to be thanking you.
Why? If someone tries my front door, doesn't go in, but confirms that it is unlocked by opening it an inch (= verifies the DB credentials but doesn't run any queries) without really peering into my private spaces, then privately reaches out with "hey, your door is not locked - I haven't gone in, but I know it's unlocked, you may want to look into this", then I imagine that while it could be an odd situation (e.g. depending on whether one has a lawn), I would be grateful and not in the least bit offended.
Surely, I wouldn't be happy if I got an alert that my door was suddenly open (IDS alert), and I would react accordingly. But if my door is not locked and I'm not aware, and someone responsibly discloses this - I don't see how that'd be an issue.
Sure, but that doesn’t mean that I’d be thanking you.
These arguments about computer crime law are always the same, and people with your view always shoot themselves in the foot with analogies like this. This is not a pre-existing social expectation. If someone comes to my front door, tells me that it’s unlocked, and tells me that they were trying people’s front doors for the intellectual thrill, there is a 0% chance that I’m reacting positively. I challenge you to find any material proportion of well-adjusted non-nerds who don’t agree with me.
These analogies to the real world fall apart when you realize that cyberspace is filled with millions of people trying to "break into your house". If you have an internet-connected service you need to expect people to attack it. Not so with a house.
Of course, you have every right to be upset that someone tried to do that to you. But it's clear they don't have bad intentions at least, because they let you know.
If somebody tried my door handle, I’m immediately going to assume malicious intent.
Start trying door handles in your neighbourhood and I can guarantee you’ll either be assaulted by an unhappy resident or arrested pretty quickly. It’s not acceptable behaviour however altruistic you believe it to be.
For others: The ad claims that the news outlet that published the fact that the state website had teacher SSNs was part of the "fake news media" exploiting privacy for political gains.
Apparently the SSNs were embedded in the page's HTML. The ad makes it sound like a huge reverse-engineering job.
The two situations aren’t really comparable. The ABC, or the minister responsible for selecting its CEO (it’s owned and funded by the gov as a corporate entity, but not run by it), would be laughed out of the room if they suggested that the author should be charged.
They’ve been reasonable by waiting four months from initial contact, but in vulnerability disclosures it’s polite to include a clear timeline of events. There’s still some detail that hasn’t been fully resolved, but it’s not clear what the residual impact is.
This particular post doesn’t really seem to go into too much depth about what these keys are used for, or the damage that could be done, but I’m erring on the side of ‘meh’ until proven otherwise. It’s freely viewable content if you’re in Australia. They’ve obviously stuffed up on multiple fronts, and my money is on these issues being introduced by an integrator, rather than ABC employee.
Lastly, the ABC is a corporate entity that is fully owned by the commonwealth (and beloved by most Australians) - the article describes it as ‘state media’, which has sinister propaganda connotations of broadcasters in some other countries.
> Lastly, the ABC is a corporate entity that is fully owned by the commonwealth (and beloved by most Australians) - the article describes it as ‘state media’, which has sinister propaganda connotations of broadcasters in some other countries.
I’d compare ABC to PBS in America. They both are state media. I think more liberal usage of accurate terms in this way is needed. Folks ought to know who funded and produced the reporting that they consume, especially when it’s those same folks doing the funding. It seems like you’re saying that calling it “state media” is something like spreading FUD, but to object to it being said in neutral terms entirely? I don’t see who benefits from leaving that info out either.
I think the larger issue is these terms being used as dog whistles for propaganda in the first place. Those who have the power to label others as propaganda are not themselves subject to such labeling. Curious.
Both PBS and ABC are independent of the ‘state’ though. They’re public broadcasters funded by public funds, which are administered by a federal agency (Dep of Industry in Australia).
It's true in terms of Bernard Collaery / Witness K, but this is not the same at all. The entity being "hacked" is the ABC, which is an independent broadcaster that the government of the day is constantly trying to claim is biased against it. There is no reason to think that the CDPP or the Attorney General would try to defend the ABC from something like this. If anything they would point and laugh, and use it to justify another huge budget cut.
In terms of responding to criticism in the digital realm, you can get a pretty illustrative view of the situation by looking at how the Digital Transformation Office handled feedback on its steaming pile of shit of a coronavirus proximity alert app. Numerous researchers found serious flaws constantly from its first release. It took something like 6 months for them to even acknowledge that any outside researchers had been helpful, IIRC. Release notes were like "fixed bugs", and then the researchers would decompile the .JAR again and say "nope, you absolutely did not fix this huge problem", and then find another one. Meanwhile the government sank millions more dollars into BCG consulting to review it, while these people were working for free and getting no credit. I think the campaign to cast doubt on the app's safety, efficacy and security was successful, and it did not get taken up; then the report that was due to be published about the project was never released, and it was all swept under the rug. Overall I think it was about $10 million spent to identify somewhere in the vicinity of 5 covid cases - I don't remember exactly, but I think all of them were also identified by phone interviews.
They did not, however, prosecute the researchers. So, ignorant and unkind, but mostly not like that journalist in the US who enraged a governor to the point of being criminally investigated for opening a webpage.
To be fair, I think a lot of developers begin with that. There is a logistical problem in providing secrets to a process without getting the secret exposed. Environment variables are an often-chosen approach. Of course, when the software is tested and ready to be deployed, the step of moving credentials into a secure container is often neglected, as probably happened here. This isn't necessarily sloppy programming; it's just skipping an essential step.
How do you provide your secrets to your apps? Using an external service? That would still require another set of credentials. Using environment variables? A file only the user running the app has access to? Another way?
According to the article, they were keeping their environment variables in React's local state. To anyone who works with React professionally, or even on the side, it's baffling that a team would do this.
I'm honestly wondering who they hired for the job. Because this is one of the most fundamental failings in security I've ever seen.
It is a bit more nuanced than that. This web client needs access to non-secret keys that are passed via environment variables. This is absolutely commonplace.
However, there are two real issues here: first, some bug in the code is causing all environment variables to be dumped into the JS bundle. You can see that in irrelevant keys like PATH, HOME and PORT.
That issue wouldn't be such a huge problem by itself. The second problem is that environment variables that shouldn't be there, such as the secret ones, were present during the frontend build. The CI or build machine should have been isolated well enough to prevent this from happening.
This is a problem in the CI coupled with a bad build process.
Yep. Maybe just forbidding enumeration of environment variables at runtime would be enough for this case.
However the CI shouldn’t have backend-only private variables available to frontend builds… some separation here would be safer regardless of developer mistakes.
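A minimal sketch of that separation, assuming an allow-list convention where only explicitly prefixed variables may reach the client bundle (Create React App and Vite do something similar with their REACT_APP_ / VITE_ prefixes; the PUBLIC_ prefix and variable names below are made up):

```javascript
// Build-time helper: instead of dumping all of process.env into the
// bundle, allow-list only variables with an explicit "public" prefix.
function clientSafeEnv(env, prefix = 'PUBLIC_') {
  return Object.fromEntries(
    Object.entries(env).filter(([key]) => key.startsWith(prefix))
  );
}

// Simulated build-machine environment (illustrative values).
const buildEnv = {
  PUBLIC_API_BASE_URL: 'https://api.example.com', // fine to ship
  DRM_WEB_SECRET: 'super-secret',                 // must never reach the bundle
  PATH: '/usr/bin:/bin',                          // irrelevant to the client
};

// Only PUBLIC_API_BASE_URL survives the filter.
console.log(clientSafeEnv(buildEnv));
```

Even with this in place, the point above stands: the safest build machine is one where the backend secrets simply aren't present at all.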
Yeah, it shouldn't reach the client in any case. But providing secrets to applications isn't really a well solved problem in my opinion. Even if it is just an environment variable for the server process it could get exposed.
If a client needs an API key, I would think to route the requests through the server and add the key information at that point, but I am not a web developer and not sure if that scales to every use case.
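That proxy pattern, sketched - the key name, upstream URL and header are all hypothetical; the point is that only the server ever attaches the third-party key, so it never appears in anything sent to the browser:

```javascript
// Server-side only; in practice this would come from the server's secret store.
const SEARCH_API_KEY = 'server-side-only-key';

// Given an untrusted query string from the client, build the authenticated
// request the server makes to the third-party API on the client's behalf.
function buildUpstreamRequest(query) {
  return {
    url: 'https://search.example.com/v1?q=' + encodeURIComponent(query),
    headers: { 'X-Api-Key': SEARCH_API_KEY },
  };
}

// A route handler would call buildUpstreamRequest(req.query.q), fetch the
// upstream URL, and forward only the response body back to the client.
```

The trade-off, as others note below, is that the proxy now carries the traffic and the abuse-mitigation burden itself.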
It's simply software engineering malpractice to have ever sent any of those keys to the client.
There is no excuse.
It is a well-solved problem to handle secrets; there are better and worse solutions. An environment variable for a server can get exposed if the server is hacked; a secret sent to a client is exposed the second the server goes live. One of these is much worse than the other.
There are also better solutions than environment variables. A competent team would be aware of many options. Whoever coded this is not competent, full stop. It's not that they didn't finish; these services should never have been accessed from the client at all.
> How do you provide your secrets to your apps? Using an external service? That would still require another set of credentials. Using environment variables? A file only the user running the app has access to? Another way?
It sucks for a small team or for anyone who is trying to run a free tier, but Terraform plus AWS Secrets Manager or Vault works really well. Using a DB password as an example: for our app we generate a random password, store it in Secrets Manager, and our containers on Fargate run with an IAM role that allows access to that secret. Our state is stored in S3, and the infra is applied on commit to main, with a terraform plan run on the merge request to main.
Our biggest attack vector is always going to be someone using elevated credentials to access something, but this way there is no state on a developer's machine at any point for any of our production infra.
> How do you provide your secrets to your apps? Using an external service? That would still require another set of credentials. Using environment variables? A file only the user running the app has access to? Another way?
A credential/key storage service, either on device/server or as a separate device, with IAM to control whether the user executing that process can use that secret or not. The user in this case for prod services should be a prod (non-human) user.
When all your services are like this, no person should have direct access. You generate keys/secrets depending on which services you want to allow to communicate with each other, and the keys live in the key store. For secrets for external services, e.g. API keys, someone would have to enter them once, yes.
You also should try not to rely on secrets, rather invest in proper authentication/authorization logic.
There's a chicken-and-egg problem here. If you move your secrets to a secret management service, how do you provide the credentials to unlock that? Whether it's on disk, in the environment, or on an internal endpoint like IAM host roles, there are ways for this to be exposed in the event of bugs or security vulnerabilities in your application.
Shameless plug, but we built EnvKey[1] for exactly this purpose.
Instead of providing secrets directly to a server, you generate an ENVKEY token which is used to fetch and decrypt the app’s secrets/config and supply them as environment variables to the process.
The ENVKEY still needs to be protected, but you can limit which IPs can access it and if access is cut off, the process will be killed and secrets cleared immediately, so you do get some additional protection. All access attempts are also logged.
Environment variables on the server side, sure, but having those end up in the client...? If you find that this is a relatable mistake, I hope you have someone with more experience reviewing your code and processes before they touch anything remotely sensitive in production...
I'm not sure how bad this actually is. I haven't examined all the env variables exposed, but it's fairly common to expose public-facing api keys for services that require client-side communication with a 3rd party API. E.g. for client-side bug tracking, search etc.
Currently, most of the strange state keys seem to have been removed. When you check the web archive (http://web.archive.org/web/20211201000716/https://iview.abc....), though, you can see variables like "USER": "www-data", "HOME": "/var/www" and "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin". There's something called "DRM_WEB_SECRET" which isn't in the HTML anymore; I'm guessing they shouldn't have shared that.
And yet it's incredibly common. I would guess 95% of users of services like Firebase, Supabase, Algolia, Sentry, Segment, Cloudinary, Auth0... do this, because it's the point and officially endorsed. They are intended to be "frontend safe" to an extent. Not against service abuse/scraping maybe, but against actual RCE, unauthorized actions or unauthorized data access.
I guess you can proxy all that, but then what do you do that the third party couldn't be doing for you? Can the user not accomplish the same thing through your proxy that they could through the key? It'll be easier to drop some requests than it is to revoke an API key, I guess. You could use client certificates, stuff along the lines of Cloudflare's API Shield, etc., but I would guess that only the top 5% of applications do this.
I've certainly done it/am doing it right now, which is likely why I'm writing such a defensive comment.
Honestly if an enterprising user/developer wants to do something like dump all data that is already accessible to them, more power to them.
Usually the public-facing keys are restricted in a number of ways to help prevent abuse. E.g. they'll have strict rate limiting and fine-tunable scopes, be domain-restricted, or only work in conjunction with a server-side secret.
E.g. Stripe has a publishable key and a secret key. The publishable key links the checkout session to a particular Stripe account, but you can't actually initiate a checkout session without setting a session ID from the server (which requires the secret key). If the 2 keys don't belong to the same account then the checkout session will fail.
Yes, with some services you can proxy requests via your own service. But how is this more secure? If anything you've just increased the potential attack surface.
If your visitors are making requests to SaaS APIs on your behalf, how can a SaaS identify the visitors belong to you without a key?
In general if a SaaS has a client-side SDK, they’ve designed around this and give you an API key just for the client bundle. It has only the permissions required for the client SDK, which - yes, could give a client the ability to run up your bill. But you could say the same about any usage based service. It’s up to you and the service to mitigate against that.
I’m not familiar with every variable in the screenshot from this blog post. Of those I’m familiar with, I don’t see any secrets in there.
Part of the selling point of Algolia is that they handle the misuse mitigation for search, and running queries directly through their API using public facing keys is the enabling feature for that.
The way to hide your keys would be to bounce the search through a proxy server that then does the API call on behalf of the user, but you're really not gaining anything by doing that, and now you're on the hook for traffic misuse mitigation.
Most APIs designed to be used this way also whitelist your token to your site domain, so they can't just steal your key and use it on their own site. Regardless, Algolia and services like it fully expect to be exposed to the full force of the internet at all times, so there's really nothing your API key can do that they're not already covering their bases for.
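A toy sketch of what that whitelisting looks like from the service's side (the key names and domains are made up): the public key is only honoured when the request's Origin matches a domain registered for that key.

```javascript
// Domains registered for each public key (illustrative data).
const keyRegistry = {
  pk_demo_123: ['https://example.com', 'https://www.example.com'],
};

// Honour the key only when the browser-supplied Origin header matches
// one of the key's registered domains.
function isAllowed(apiKey, origin) {
  const domains = keyRegistry[apiKey];
  return Boolean(domains && domains.includes(origin));
}

console.log(isAllowed('pk_demo_123', 'https://example.com'));  // true
console.log(isAllowed('pk_demo_123', 'https://evil.example')); // false
```

Of course, Origin can be forged by non-browser clients, so this only stops casual key reuse on other people's sites - the service's own rate limiting and abuse detection still have to handle scraping.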
I'm not a web developer, but this stuff, from the outside looking in, is reason enough for me to steer clear. Obviously I'm no expert, but this sort of thing happens often enough that I wouldn't even know where to start to learn it properly.
The disappointing thing is that preventing this sort of thing doesn't take an expert. If you wouldn't post a secret to Reddit or Twitter, don't send that secret to a browser client. That's like webdev 101 stuff.
*.id.au is an interesting domain that I haven’t seen before. Apparently you can get an id.au only if you’re an Australian citizen, and it must approximately match your real name.
From my experience it's very common with infosec students in Australia, the general public barely know it exists and probably consider it a weird second level domain.
It doesn't need to be anything like your real name, much like you only need a registered Australian company to own a *.com.au.
It's a 2LD for individuals, much like .com.au is a 2LD for companies. In this case, it's not actually supposed to match your real name, but nicknames are allowed.
The Algolia side is required and expected, no? I know you can hide said details to be even safer, but it's expected to have the API public tokens available to the client so they can use the API from your site. The keys shouldn't work on other sites since the API will whitelist your application URL, so stealing them is pointless.
Yes, and that can't be avoided. The best you can hope for is that Algolia has their own rate limiting in place that will kick in if someone starts scraping their API with your API key.
The ABC is underfunded and under attack from the government.
Even if they do have software issues, I wouldn't be publicly running attacks against them, because it would just give the government more reason to cut their funding further.
Likely the developers at the ABC are doing the best they can with the limited resources they have, and deserve our support rather than doing the Internet mob attack thing.
I wouldn't be surprised if there's a significant number of small or large deployments out there using a Node.js build tool such as webpack that has pulled in some kind of JSON configuration file, with prod keys left exposed in those huge bundles.
Also, almost every Android app on Google Play has all its Firebase API keys exposed in its manifest, and it's really easy to retrieve them from APKs/AABs.
Agile did not save their asses. Sprint planning, grooming, modern frameworks, all the trappings of modernity but crucially no one present to say "secret key in frontend is a no-no"
The reason I react (no pun intended, haha) this way is because I have seen with my own eyes blue-chip (US -- is that the right expression for "company name known quite literally the world over"?) companies doing exactly this sort of thing.
I have seen the plastering of webpacked-gulpified-obfuscated-minified-hoistpropified-younamedifiedit mind-blowing amounts of Javascript to create the latest "cool" app.
I have seen web pages best described as "this webpage is so heavy not even light can escape it". The (partial) cast, not in order of appearance:
. 5MB+ of Javascript (not kidding) -- _after_ the above
. gazillions of HTTP requests
. internal data bled through to the outside world (not always problematic from a security point of view, but one may as well: 1) save on the bytes 2) not give malicious ideas to you-know-who)
. code coverage inversely proportional to all of the above, with a grand score hovering around 3-5%.....
End result:
. from inside: a bloated over-engineered chaos (see entry: "Big Ball of Mud")
. from outside: catastrophic load times of around 35 seconds w/o a primed cache.
Talk about the latest shiny new "single page application".....
Plenty of server-rendered apps have been caught putting private data in responses. In this very same comment section people have mentioned the story of a state website that included teacher SSNs in hidden fields[1], and OJFord shared a story of a server that included a full env var dump in error messages[2].
This sort of thing happens all the time to all sorts of services. Rather than just blaming JS, it's far more productive to think of technical controls that could catch this. For example Taint Checking[3] or scanning server responses for API keys.
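As a sketch of the second control - scanning what the server actually sends for credential-shaped strings, e.g. in an integration test run before deploy. The patterns below are illustrative, not exhaustive:

```javascript
// Patterns that suggest a credential has leaked into a response body.
const SECRET_PATTERNS = [
  /AKIA[0-9A-Z]{16}/,                               // AWS access key ID shape
  /-----BEGIN (RSA )?PRIVATE KEY-----/,             // PEM private key
  /[A-Za-z0-9_]*SECRET[A-Za-z0-9_]*["']?\s*[:=]/i,  // FOO_SECRET=... style
];

// Return the patterns that matched, so a failing test can say why.
function findLikelySecrets(body) {
  return SECRET_PATTERNS.filter((re) => re.test(body)).map(String);
}

// A page like the one in the article would trip the SECRET pattern.
const page = '<script>var cfg = {"DRM_WEB_SECRET":"abc"}</script>';
console.log(findLikelySecrets(page));
```

Tools like secret scanners in CI work on the same principle; the value of doing it against rendered responses and bundles is that it catches leaks introduced by the build, not just ones sitting in the repo.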