Happy holidays from AO3!

sarking:

ao3org:

To celebrate the spirit of giving and gathering, we’re providing 8 invitations to all users who have been with us for 6 months or more and have

  • posted at least 1 work, or
  • left at least 5 comments, or
  • given at least 10 kudos

Since we’re generating a lot of invitations (over 7 million!), it might take a few days for them to arrive in your account, so don’t worry if you haven’t gotten them just yet! You can follow these instructions to access and share your invite codes with anyone who wants an account.

Happy Hanukkah!

Me, not thinking about how many users we have: last year we sent holiday invitations on Christmas… hey, right now it’s Hanukkah! Let’s make it eight!

Also! Why you might want to hit someone up for an invitation even if you don’t create fanworks:

  • Bookmark things! Even privately! Because sometimes you really want to return to that particular piece of filth over and over but you don’t want anyone to know. >.>
  • Change how the site looks! For example, the Reversi skin provides a dark grey background with white text, perfect for ~bedtime reading~, as 99% of our Twitter followers refer to it when we go down at certain hours. 
    • There’s even a moderately useful skin wizard to make it easier to tweak the display if you don’t know CSS and there isn’t a skin that fits your needs. (It’s okay; I’m allowed to call it “moderately useful” because I’m the one who rewrote the code. I know where it’s lacking.)
  • Leave kudos with your name attached! (I mean, maybe you don’t want this, depending on your reading habits. But maybe you do!)
  • Subscribe to your favorite WIPs, series, or creators to get email notifications of updates! Just… don’t forget to check your spam folder or your Social tab if you use Gmail, because sometimes they end up there.
  • Keep track of what you’ve read and want to read with History and Marked for Later! A random selection of three works you’ve Marked for Later will even sit there on the homepage and judge you silently when you go I have nothing to reeeead.

dexer-von-dexer:

danshive:

In science fiction, AIs tend to malfunction due to some technicality of logic, such as that business with the laws of robotics and an AI reaching a dramatic, ironic conclusion.

Content regulation algorithms tell me that sci-fi authors are overly generous in these depictions.

“Why did cop bot arrest that nice elderly woman?”

“It insists she’s the mafia.”

“It thinks she’s in the mafia?”

“No. It thinks she’s an entire crime family. It filled out paperwork for multiple separate arrests after bringing her in.”

I have to comment on this because it touches on something I see a lot of people (including Tumblr staff and everyone else who deploys these kinds of deep learning systems willy-nilly) don’t quite get: deep learning classifiers like these engage with reality in a fundamentally different way from humans. I see people testing the algorithm to find where the “line” is, wondering whether it looks for color gradients, skin-tone pixels, certain shapes, curves, or what have you. All of these attempts to understand the algorithm fail because there is nothing to understand. There is no line, because there is no logic. You will never pin down the “criteria” the algorithm uses to identify content, because it doesn’t use logic to identify anything, only raw statistical correlations stacked on statistical correlations stacked on statistical correlations. There is no thought, no analysis, no reasoning. It does all its tasks through sheer unconscious intuition. The neural network is a shambling sleepwalker. It is madness incarnate. It knows nothing of human concepts like reason. It will think granny is the mafia.

This is why a lot of people say AI is so dangerous. Not because it will one day wake up, become conscious, and overthrow humanity, but because these systems (or at least this type of system) are not and never will be conscious, and yet we’re relying on them to do things that require such human characteristics as logic and any sort of thought process whatsoever. Humans have a really bad tendency to anthropomorphize, and we’d like to think the AI is “making decisions” or “thinking,” but what it’s doing is fundamentally different from either of those things. What we see as, say, a field of grass, a neural network may see as a bus stop. Not because there is actually a bus stop there, or because anything in the photo resembles a bus stop to our understanding, but because exactly the right pixels were shaded in exactly the right way to correlate with the arbitrary features it built up while being exposed to pictures of bus stops during training. It doesn’t know what grass is or what a bus stop is, but it sure as hell will say with 99.999% certainty that one is in fact the other, for reasons you can’t understand, and will drive your automated bus off the road and into a ditch because of this undetectable statistical overlap. Because a few pixels were off in just the right way in just the right places and it got really, really confused for a second.

There, I even caught myself using the word “confused” to describe it. That’s not right, because “confused” is a human word. What’s happening with the AI is something we don’t have the language to describe.
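That 99.999%-certain-about-nonsense behavior is easy to reproduce even in a tiny model. Here’s a toy sketch (the two-cluster setup and every number in it are made up for illustration; this is the simplest possible “classifier,” nothing to do with Tumblr’s actual system) showing how a trained model can report near-total confidence on an input unlike anything it was trained on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two tiny training clusters: class 0 near (-1, -1), class 1 near (+1, +1).
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(+1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Bare-bones logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# A point unlike anything the model has ever "seen". It has no concept of
# "this is weird, I shouldn't judge" -- the score just keeps growing with
# distance from the decision boundary, so the reported confidence saturates.
weird_point = np.array([500.0, 500.0])
confidence = float(1 / (1 + np.exp(-(weird_point @ w + b))))
print(f"confidence this is class 1: {confidence:.6f}")  # effectively 1.0
```

The model isn’t “sure” of anything in the human sense; the number is just a squashed dot product, and far from the training data it happily pegs itself at 100%.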

What’s more, this sort of trickery can be done deliberately. A human couldn’t work out how, but another neural network can approximate the statistical filters the algorithm uses and figure out how to alter an image with a little noise in exactly the right way so the algorithm classifies it as something else entirely. It’ll still look like the original image, maybe with some faint pixelated artifacts, but the algorithm will see something completely different. This is what’s known as an “adversarial example” (the extreme version, where changing a single pixel is enough to flip the classification, is called a “one-pixel attack”). I wouldn’t be surprised if porn bot creators end up cracking the content flagging algorithm and start putting up some weirdly pixelated porn anyway, and all of this will be in vain. All because Tumblr staff decided to rely on content moderation via slot machine.
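The gradient-based version of this trick can be sketched on the same kind of toy linear model (again, an entirely made-up setup; real attacks on image classifiers operate in thousands of pixel dimensions, where the per-pixel nudge is invisibly small, while this 2-D toy needs a visibly big shove to make the same point):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a linear "classifier" separating two clusters.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(+1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    """Model's confidence that x belongs to class 1."""
    return float(1 / (1 + np.exp(-(x @ w + b))))

# A point the model classifies correctly and confidently as class 1.
x = np.array([1.0, 1.0])
before = predict(x)

# Adversarial nudge (the idea behind the "fast gradient sign method"):
# step every input dimension a fixed amount in whichever direction most
# increases the loss. For a linear model that direction is just -sign(w).
epsilon = 2.5
x_adv = x - epsilon * np.sign(w)
after = predict(x_adv)

print(f"before: {before:.4f}  after: {after:.4f}")
```

The attack never reasons about what the input “is”; it only reads off which direction the model’s score moves fastest, which is exactly why one network can fool another without either of them understanding anything.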

TL;DR bots are illogical because they’re actually unknowable eldritch horrors made of spreadsheets and we don’t know how to stop them or how they got here, send help

sounddesignerjeans:

tittyhonker:

this clown boy (actual clown, literally from a circus) was like ‘ya u can pick me up from the max station after work’ then when i got off i said ‘ok tell me how long it’ll take u to get there and i’ll be there’ 2 hours ago and i haven’t heard back.. smh my first mistake was fuckin with a carless bum but his dick is big and i just really want to Fuck a Clown. also we exchanged explicit pics thru email because he doesn’t have a phone