To me one important aspect is the existence of adversarial attacks on neural networks.
They essentially prove that the neural network never "understood" its data. It hasn't found some general categories which correspond somewhat to human categories.
Human brains can be tricked too, but never this way and never beyond our capacities for rational thought.
"Never this way and never beyond our capacities for rational thought" makes it so nothing can be stated about the "understanding" any neural network has about any data, short of an AGI. Which is obviously not too useful, because an incomplete model of something is not the same as not having the model at all. E.g. if the model is generating too many arms, that still means it has extracted some model of what an arm looks like, even if it hasn't fully internalized the fact that humans have at most two arms (although depending on the training set this also gets messy, as it isn't uncommon for religious imagery of various religions to depict humanoid figures with several arms, where the difference is related to context not available to the AI).
Humans can be tricked very easily by optical illusions, and it isn't uncommon for some illusions to be intentionally built to 'harm' people (e.g. patterns on the floor that make you lose your sense of balance). Even with rational thought, such things can be difficult to deal with. We're probably just as vulnerable to adversarial attacks; the issue is that, unlike artificial neural networks, we don't have an easy feedback loop to run millions of times to guide a similar adversarial search.
The main difference from optical illusions is that we are aware of them and integrate this knowledge into our model of the world, so that we can deal with them to a certain extent.
Con artists aren't a big problem; if their tricks worked on everyone, then con artists would be the richest people in the world. Like, just con Elon Musk out of his billions, why hasn't anyone done that yet if it is so easy to trick humans?
> Like, just con Elon Musk out of his billions, why hasn't anyone done that yet if it is so easy to trick humans?
Like getting him to spend $44 billion for Twitter?
(An ex of mine is convinced that Musk is a con artist, but she's also a literal card carrying anarcho-communist; I'm not that cynical about Musk).
Even at a lower level, I had my bank[0] call up and tell me there was too much money in my account and they'd really recommend a wealth management consultation to avoid me being scammed, and that wasn't even £100k.
That said, I was thinking mainly of street cons — shell games, possibly even shoplifting and pickpocketing — as the previous discussion was about optical illusions. Business level scams are about a broader category of cognitive bias, and I'd say almost all gambling is that type of thing, likewise bitcoin, dulce et decorum est, and populist politics.
[0] or at least they said they were, but I said no before getting to the point where asking for proof the call wasn't itself a scam would've been useful
> Human brains can be tricked too, but never this way and never beyond our capacities for rational thought.
It's inconceivable to me that humans wouldn't be trickable by exactly the same sort of adversarial inputs-- it's just that because we're not differentiable there is no feasible way to find these inputs.
People have constructed fairly impressive optical illusions based on our understanding of the neural structure of the early stages of vision processing. The fact that we lack more complicated examples like "random images" that make us feel hate or disgust or that we're convinced are our mother is simply due to our lack of understanding and access to the higher neural structures.
I imagine that if the structure of a human brain was well-understood and mathematically describable the way NNs are, generating adversarial inputs would be completely feasible.
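To make that point about differentiability concrete, here is a minimal sketch of the kind of gradient-based search that finds adversarial inputs, in the style of the fast gradient sign method. The "model" here is a hypothetical toy logistic classifier with made-up weights, not any real network; the point is only that a differentiable model lets you compute exactly which direction to nudge the input to increase the loss:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed weights of a toy logistic "classifier"
x = rng.normal(size=16)   # a "clean" input
y = 1.0                   # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x):
    # binary cross-entropy of the toy model on (x, y)
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the *input* (not the weights):
# d/dx BCE(sigmoid(w @ x), y) = (sigmoid(w @ x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w

# FGSM-style step: a small, sign-only perturbation in the direction
# that increases the loss. Each coordinate moves by at most eps.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print(loss(x), loss(x_adv))  # the perturbed input has strictly higher loss
```

With a non-differentiable system like a brain, there is no `grad_x` to read off, so the same search would have to proceed by blind trial and error.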
This is often a big part of how neural networks are trained. Take labelled photos (really, it applies in abstract ways to other data too), add a little noise, mirror, shrink and stretch, translate to a different area, etc. One piece of data becomes several training images.
And no, this standard practice does not eliminate adversarial examples.
No, not while training: just ask the model to predict for a few transforms and take the mode, simulating the fact that humans also have multiple frames' worth of information about any object, from slightly different angles.
This is actually a thing, yes (and it does increase robustness to adversarial examples). This is particularly useful if you apply (random) low-pass filtering (or denoising, or DCT-based compression, or anything that messes up high-frequency content) to the image (besides random cropping/rescaling), since adversarial examples often rely on manipulating (human-imperceptible) high-frequency information.
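A minimal sketch of that idea (random jitter plus a crude low-pass filter, then a majority vote over several predictions). The `predict` function here is a made-up stand-in for a real classifier, and the box blur stands in for any of the denoising/compression filters mentioned above:

```python
import numpy as np

rng = np.random.default_rng(42)

def predict(img):
    # hypothetical classifier: class 1 if mean intensity > 0.5, else 0
    return int(img.mean() > 0.5)

def low_pass(img, k=3):
    # crude box blur: average over each k x k neighbourhood (valid region only);
    # this destroys the high-frequency content adversarial perturbations rely on
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + k, j:j + k].mean()
    return out

def tta_predict(img, n=16):
    # test-time augmentation: predict on n randomly jittered, blurred
    # copies of the input and return the majority vote (the mode)
    votes = []
    for _ in range(n):
        noisy = img + rng.normal(scale=0.01, size=img.shape)
        votes.append(predict(low_pass(noisy)))
    return max(set(votes), key=votes.count)

img = np.full((8, 8), 0.7)  # a bright dummy "image"
print(tta_predict(img))
```

A real pipeline would also mix in random crops and rescales, but the structure is the same: many perturbed queries, one aggregated answer.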
I think probably that this kind of meta-understanding exists on a continuum where fruit flies and slime molds exist on one side and current AI exists somewhere in the lower third and we exist somewhere in the upper third and a future AI with a vast number of AI brain components and huge amounts of training will eventually exist at the other end, possibly as far from us as we are from the fruit fly.
Optical illusions are one thing, but, I don't know, "Predictably Irrational", "Thinking, Fast and Slow", or just whatever is happening all around.
We do not understand our data.
In general yes, I believe most people will only accept a thinking machine when it can reproduce all our pitfalls. Because if we see something and the computer doesn't, then it clearly still needs to be improved, even if it's an optical illusion.
But our bugs aren't sacred and special. They just passed Darwin's QA some thousands of years ago.
I'd agree about sacred, but I have a hunch they may indeed be special… or at least useful. Current AI requires far more examples than we do to learn from, and I suspect all our biases are how evolution managed to do that.
Humans are trained on petabytes of data. From birth, we ingest sights, sounds, smells etc. Imagine a movie of every second of your life. And an audio track of every second of your life. Etc. Etc.
Tesla autopilot has a movie of every second it's active, for every car in the fleet that uses it. It has how many lifetimes of driving data now? And yet, it's… merely ok, nothing special, even when compared to all humans including those oblivious of the fact they shouldn't be behind a wheel.
Not sure "biases" gives the evolved structures in our brain enough credit. Maybe the functions of those structures could be emergent in a large enough network, but that would be a very different context from what a human sees in its extremely rapid development. The rapid learning could come from the unique architecture. The free-running feedback loop (consciousness) that we have seems like a good example of how different our architecture is, with our ability to continuously prompt ourselves and learn from those prompts.
Yeah, but I disagreed with your point "Current AI requires far more examples than we do to learn from", since I think you need to count the amount of data that was seen by all your ancestors, maybe even starting from the first self-replicating molecule, billions of years ago.
Fair enough. I wouldn't go quite that far (at absolute most I would accept counting since the first prototype of a neural cell), but the estimates I've seen suggest the data/training AGI would need, if training it requires a simulated re-run of modern human evolution, is more than current (or at least recent) AI uses.
> Human brains can be tricked too, but never this way and never beyond our capacities for rational thought.
And yet we're living in the age of misinformation where propaganda spreads like never before. Isn't that essentially an adversarial attack which shows how susceptible humans are to them too?
For the record, I don't think that NNs are brains either. I think our difficulty in differentiating sufficiently between them is because we don't really know what sentience really is.