Let’s Talk Websites

I wish I’d written a post-mortem of my last disastrous hike. Not because it’s an opportunity to humble-brag about a time I hiked 43 kilometres, nor because these stories lead to compelling narratives, but because it’s invaluable for figuring out both what went wrong and how to fix it. As a bonus, it’s an opportunity to educate someone about the finer details of hiking.

Hence when it was suggested I do a post about FreethoughtBlogs’ latest outage, I jumped on it relatively quickly. Unlike my hiking disasters, though, a lot of this comes second-hand via PZ plus some detective work on my side, so keep a bit of skepticism handy.

[Read more…]

Part Three: Welcome to OUR Mastodon!

Are blogs dying off? The trend of setting up faux blogs to rig search results and/or soak in ad revenue suggests so. The rise of newer mediums, like video and social media, has also created powerful and more addictive alternatives that drain the life from blogging. However, it’s hard to keep a straight face during the eulogy when Substack and Medium are standing right there.

Over here at FtB HQ we’ve been hedging our bets, for instance with the YouTube channel we fired up a year ago. That wasn’t enough for me, so a few months ago I committed to the rather unoriginal idea of spinning up a Mastodon instance. After much tinkering with the innards and taking the thing for a few joyrides, I think it’s ready to go live. Hence, this post! [Read more…]

Get off Twitter NOW

[2022-12-9 HJH: If you caught this early, scroll to the bottom for an update.]

You remember Bari Weiss, right? She’s behind the “University” of Austin, an anti-woke school I haven’t discussed much but PZ has extensively covered. She’s also whined about COVID and complained about censorship of conservative voices at universities, but most of you likely learned of her from her fawning coverage of the “intellectual dark web.” Her resignation letter from the New York Times editorial board is exactly what you’d expect, given that background.

… a new consensus has emerged in the press, but perhaps especially at this paper: that truth isn’t a process of collective discovery, but an orthodoxy already known to an enlightened few whose job is to inform everyone else.

Twitter is not on the masthead of The New York Times. But Twitter has become its ultimate editor. As the ethics and mores of that platform have become those of the paper, the paper itself has increasingly become a kind of performance space. Stories are chosen and told in a way to satisfy the narrowest of audiences, rather than to allow a curious public to read about the world and then draw their own conclusions. I was always taught that journalists were charged with writing the first rough draft of history. Now, history itself is one more ephemeral thing molded to fit the needs of a predetermined narrative.

My own forays into Wrongthink have made me the subject of constant bullying by colleagues who disagree with my views. They have called me a Nazi and a racist; I have learned to brush off comments about how I’m “writing about the Jews again.” Several colleagues perceived to be friendly with me were badgered by coworkers. My work and my character are openly demeaned on company-wide Slack channels where masthead editors regularly weigh in. There, some coworkers insist I need to be rooted out if this company is to be a truly “inclusive” one, while others post ax emojis next to my name. Still other New York Times employees publicly smear me as a liar and a bigot on Twitter with no fear that harassing me will be met with appropriate action. They never are.

This received a bit of pushback from her peers at the time, which was rather remarkable given these were employees publicly critiquing their own boss. But I’m getting a bit distracted here; the key point is that back in 2020, Bari Weiss had a beef with Twitter. In her opinion, it was not only part of the woke left stifling conservative voices, it was also the vector her colleagues used to slander her good name. I seriously doubt any of us paid much attention to that back in the day, as Twitter has long been a target of conservatives for allegations of “shadowbanning,” or reducing the visibility of certain tweets or Twitter users. Who cares about yet another conservative with a conspiracy-fueled grudge?

On Friday, a more unexpected sighting came in the form of Weiss, the conservative newsletter writer who was previously a New York Times opinion columnist. Weiss was in the San Francisco office that evening, speaking and “laughing with” Musk, two employees said.

By Saturday, Musk said Weiss would take part in releasing what he’s dubbed “the Twitter files,” so far consisting mainly of correspondence between Twitter employees and executives discussing their decision in 2020 to block access to a New York Post article detailing material on Hunter Biden’s stolen laptop. Now, Weiss has been given access to Twitter’s employee systems, added to its Slack, and given a company laptop, two people familiar with her presence said.

The level of access to Twitter systems given to Weiss is typically given only to employees, one of the people familiar said, though it doesn’t seem she is actually working at the company.

Oh. Oh dear. It gets worse, too! Remember the firing of James Baker? He was one of Twitter’s lead lawyers, until Matt Taibbi and Weiss realized who he was and accused him of blocking their full access to Twitter’s internal records. Which, of course he did! If you were going to give a third party extensive access to sensitive internal documents, you’d be daft not to have a lawyer present to vet for legal consequences. Which leaves us with the question: when Musk fired Baker, did he substitute in another lawyer to vet the access given to Weiss and Taibbi? Given his love of flouting the law, it’s a fair bet he did not. So it was basically inevitable that a terrible situation would get worse.

A screenshot of Twitter’s internal dashboard, showing details of the Libs of TikTok account.

This screenshot, shared by Weiss, set my hair on fire. Just by looking at it I can tell it’s an internal Twitter dashboard pointed at the Libs of TikTok account. Most of the identifying information has been cropped out, though that still leaves a lot behind. I now know Chaya Raichik uses a custom domain as her private Twitter email, which likely changed some time between April and December and is probably [something]@libsoftiktok.com. The image itself is a crop of a photo taken on an Apple phone on the evening of December 8th, so Raichik hadn’t been back on Twitter since she’d posted a tweet a day or two prior. Raichik has two strikes on her account, including a recent one for abusing people online; she has at least one alt account; and she’s blacklisted from trending on that platform, which is a good thing. Parker Molloy points out that, despite what Weiss says, this screenshot is evidence conservative accounts are given special treatment. The banner up top says that even if a Twitter mod thinks Libs of TikTok has violated Twitter’s policies, that mod is not to take any action unless Twitter’s “Site Integrity Policy and Policy Escalation Support” team signs off on it. In other words, Twitter has given Raichik a few Get-Out-Of-Jail-Free cards for policy violations, even though she’s a repeat offender.

Notice the faint text on the screen? Based on that, a former Twitter employee was able to conclude either Twitter’s current Vice President of Trust and Safety was logged in at the time, or someone with a similar level of access. Zoom in, and you’ll note the text follows the curve of the lens; in other words, that text was overlaid on the monitor and not the photo. Remember how Reality Winner was tracked down by the FBI because The Intercept didn’t purge the watermarks on a printed page? This is the same thing: by forcing the operating system to overlay this text on the screen, Twitter could track down anyone who leaked a screenshot or image of Twitter’s sensitive internal information. This isn’t an employee-only page Weiss is looking at, this is the equivalent of a Top-Secret document that the vast majority of Twitter staff aren’t trusted with. She’s one click away from learning when Raichik paid $8 for her verification mark, or what her email address is, or her phone number, or … reading all her private direct messages.

That, right there, is at least a two-alarm fire. About the only good news is that the person with this level of access is Bari Weiss. Sure, she could read the private messages of Democratic members of Congress, but her past in the media makes her unlikely to do much with that info. She’s probably not much of a threat, unless you’re a New York Times reporter.

Our team was given extensive, unfiltered access to Twitter’s internal communication and systems. One of the things we wanted to know was whether Twitter systemically suppressed political speech. Here’s what we found:

Abigail Shrier @ 5:28PM, December 8th 2022.

THAT is a four-alarmer. Abigail Shrier is a former lawyer, but after her 2020 book she’s become an anti-LGBT crusader, testifying before the US Congress and peddling misinformation. She’s published private information in an effort to shut down an LGBT club at a school and attempted to get two teachers fired as a result. Thanks to her legal experience, she likely knows how to push the limits of what is considered legal. And now, if what she’s saying is accurate, she’s got the same level of access to Twitter as Bari Weiss. She could read the private messages of any LGBT person or group on the platform, or learn their phone numbers or private email addresses.

I’m not prone to alarm, but this news has me trying to ring every alarm bell I can find. Get the fuck off Twitter, as soon as humanly possible. That may allow someone to impersonate you in one-to-twelve months, but that’s better than giving these assholes a chance to browse your private messages.

=====

Alas, in my panic to bang this blog post out ASAP, I missed some details.

eirwin4903ZWlyd21u863, repeated over and over on all the screenshots from that internal tool.
Dustin Miller @ 8:17 PM, December 8th 2022

this couldn’t possibly be new twitter head of trust and safety Ella Irwin (@ellagirwin) letting Bari Weiss rifle around in a backend tool that clearly says “Direct Messages” in the sidebar could it?
tom mckay @ 9:26 PM, December 8th 2022

Correct. For security purposes, the screenshots requested came from me so we could ensure no PII was exposed. We did not give this access to reporters and no, reporters were not accessing user DMs.
Ella Irwin @ 10:22 PM, December 8th 2022

These watermarks are meant to prevent anonymous leaks. But usually this is for front-line people, like Customer Svc/tech support, etc. Weird it’d show up for the head of trust and safety, but elon is a paranoid dude.

Without any trustworthy explanation, this could be the head of trust/safety giving out her credentials for the non-production/testing environment. It looks so, so, so bad.
Eve @ 12:55 AM, December 9th 2022

I’ll give Ella Irwin the full benefit of the doubt. Even though she was hand-picked by Elon Musk to be the head of Twitter’s Trust and Safety team, she did not let any third party access direct messages or any other private or personal information of Twitter users. Can she prevent that from happening in future, though? I’ve already mentioned the firing of James Baker. Matt Taibbi described his sins thusly:

On Friday, the first installment of the Twitter files was published here. We expected to publish more over the weekend. Many wondered why there was a delay.

We can now tell you part of the reason why. On Tuesday, Twitter Deputy General Counsel (and former FBI General Counsel) Jim Baker was fired. Among the reasons? Vetting the first batch of “Twitter Files” – without knowledge of new management.

The process for producing the “Twitter Files” involved delivery to two journalists (Bari Weiss and me) via a lawyer close to new management. However, after the initial batch, things became complicated.

Over the weekend, while we both dealt with obstacles to new searches, it was @BariWeiss who discovered that the person in charge of releasing the files was someone named Jim. When she called to ask “Jim’s” last name, the answer came back: “Jim Baker.”

“My jaw hit the floor,” says Weiss.

As I pointed out earlier, there’s nothing odd about Twitter’s legal counsel pumping the brakes in this situation. There’s no evidence presented that Baker was hiding or manipulating anything. Taibbi describes Baker as a “controversial figure” later in the thread, which is an odd way of phrasing “he didn’t say nice things about Trump and was partially involved in the FBI’s Russia investigation, which made the US far-right declare him to be an enemy.”

One thing I didn’t point out is that Bari Weiss publicly shared private messages made by Yoel Roth on Twitter’s internal Slack. Yoel Roth is also a “controversial figure” for the US far-right, which was reason enough for Weiss to violate his privacy. It’s not a large leap from sharing the private Slack messages of a “controversial figure” to sharing the private Twitter messages of a “controversial figure,” and given the positive reception Weiss has gotten for her “reporting” from the US far-right I figure it’s only a matter of time before she asks. Best case scenario, Irwin says “no,” the conflict is escalated to her boss Elon Musk, and he’s not in a firing mood.

Thing is, despite Irwin’s claim that there’s no personally identifying information in those photos, I’ve already shown there was. Not a lot, admittedly, but it doesn’t speak highly of Twitter’s new Trust and Safety head that she didn’t realize how much a photo can reveal. On top of that, remember that Weiss and Irwin were communicating with one another. Irwin could have explained what the photos actually showed, but either did not do that or did so and was ignored by Weiss. If the latter starts asking for Twitter DMs, I’m not convinced Irwin will give much pushback.

So while we may have dodged a bullet there, more shots are planned and I’m not convinced future ones will miss. My advice remains the same: get the fuck off Twitter, ASAP.

Fundraising Update 1

TL;DR: We’re pretty much on track, though we also haven’t hit the goal of pushing the fund past $78,890.69. Donate and help put the fund over the line!

With the short version out of the way, let’s dive into the details. What’s changed in the past week and change?

import datetime as dt

import matplotlib.pyplot as pl

import numpy as np

import pandas as pd
import pandas.tseries.offsets as pdto


cutoff_day = dt.datetime( 2020, 5, 27, tzinfo=dt.timezone(dt.timedelta(hours=-6)) )

donations = pd.read_csv('donations.cleaned.tsv',sep='\t')

donations['epoch'] = pd.to_datetime(donations['created_at'])
donations['delta_epoch'] = donations['epoch'] - cutoff_day
donations['delta_epoch_days'] = donations['delta_epoch'].dt.days

# some adjustment is necessary to line up with the current total
donations['culm'] = donations['amount'].cumsum() + 14723

new_donations_mask = donations['delta_epoch_days'] > 0
print( f"There have been {sum(new_donations_mask)} donations since {cutoff_day}." )
There have been 8 donations since 2020-05-27 00:00:00-06:00.

There’s been a reasonable number of donations after I published that original post. What does that look like, relative to the previous graph?

pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')

pl.plot( donations['delta_epoch_days'], donations['culm'], '-',c='#aaaaaa')
pl.plot( donations['delta_epoch_days'][new_donations_mask], \
        donations['culm'][new_donations_mask], '-',c='#0099ff')

pl.title("Defense against Carrier SLAPP Suit")

pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim( [-365.26,donations['delta_epoch_days'].max()] )
pl.ylim( [55000,82500] )
pl.show()

An updated chart from the past year. New donations are in blue.

That’s certainly an improvement in the short term, though the graph is much too zoomed out to say more. Let’s zoom in, and overlay the posterior.

# load the previously-fitted posterior
flat_chain = np.loadtxt('starting_posterior.csv')


pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')

x = np.array([0, donations['delta_epoch_days'].max()])
for m,_,_ in flat_chain:
    pl.plot( x, m*x + 78039, '-r', alpha=0.05 )
    
pl.plot( donations['delta_epoch_days'], donations['culm'], '-', c='#aaaaaa')
pl.plot( donations['delta_epoch_days'][new_donations_mask], \
        donations['culm'][new_donations_mask], '-', c='#0099ff')

pl.title("Defense against Carrier SLAPP Suit")

pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim( [-3,x[1]+1] )
pl.ylim( [77800,79000] )

pl.show()

A zoomed-in view of the new donations, with posteriors overlaid.

Hmm, looks like we’re right where the posterior predicted we’d be. My targets were pretty modest, though, consisting of increases of 3% and 10%, so this doesn’t mean they’ve been missed. Let’s extend the chart to day 16, and explicitly overlay the two targets I set out.

low_target = 78890.69
high_target = 78948.57
target_day = dt.datetime( 2020, 6, 12, 23, 59, tzinfo=dt.timezone(dt.timedelta(hours=-6)) )
target_since_cutoff = (target_day - cutoff_day).days

pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')

x = np.array([0, target_since_cutoff])
pl.fill_between( x, [78039, low_target], [78039, high_target], color='#ccbbbb', label='blog post')
pl.fill_between( x, [78039, high_target], [high_target, high_target], color='#ffeeee', label='video')

pl.plot( donations['delta_epoch_days'], donations['culm'], '-',c='#aaaaaa')
pl.plot( donations['delta_epoch_days'][new_donations_mask], \
        donations['culm'][new_donations_mask], '-',c='#0099ff')

pl.title("Defense against Carrier SLAPP Suit")

pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim( [-3, target_since_cutoff] )
pl.ylim( [77800,high_target] )

pl.legend(loc='lower right')
pl.show()

The previous graph, this time with targets overlaid.

To earn a blog post and video on Bayes from me, we need the line to be in the pink zone by the time it reaches the end of the graph. For just the blog post, it need only be in the grayish-red area. As you can see, it’s painfully close to being in line with the lower of the two goals, though if nobody donates between now and Friday it’ll obviously fall quite short.

So if you want to see that blog post, get donating!

Fundraising Target Number 1

If our goal is to raise funds for a good cause, we should at least have an idea of where the funds are at.

(Click here to show the code)
created_at amount epoch delta_epoch culm
0 2017-01-24T07:27:51-06:00 10.0 2017-01-24 07:27:51-06:00 -1218 days +19:51:12 14733.0
1 2017-01-24T07:31:09-06:00 50.0 2017-01-24 07:31:09-06:00 -1218 days +19:54:30 14783.0
2 2017-01-24T07:41:20-06:00 100.0 2017-01-24 07:41:20-06:00 -1218 days +20:04:41 14883.0
3 2017-01-24T07:50:20-06:00 10.0 2017-01-24 07:50:20-06:00 -1218 days +20:13:41 14893.0
4 2017-01-24T08:03:26-06:00 25.0 2017-01-24 08:03:26-06:00 -1218 days +20:26:47 14918.0

Changing the dataset so the last donation happens at time zero makes it both easier to fit the data and easier to understand what’s happening. The first day after the last donation is now day one.
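That recentering is a quick pandas operation. Here’s a minimal sketch, with toy data standing in for the real donation log:

```python
import pandas as pd

# toy data standing in for the real donation log
donations = pd.DataFrame({
    'created_at': ['2017-01-24T07:27:51-06:00', '2020-05-25T12:36:39-06:00'],
    'amount': [10.0, 50.0],
})
donations['epoch'] = pd.to_datetime(donations['created_at'])

# shift the time axis so the most recent donation sits at time zero
donations['delta_epoch'] = donations['epoch'] - donations['epoch'].max()
donations['delta_epoch_days'] = donations['delta_epoch'].dt.days
print(donations['delta_epoch_days'].tolist())  # → [-1218, 0]
```

Note that pandas floors negative timedeltas, which is why the first donation lands on day -1218 rather than -1217.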

Donations from 2017 don’t tell us much about the current state of the fund, though, so let’s focus on just the last year.

(Click here to show the code)

The last year of donations, for the lawsuit fundraiser.

The donations seem to arrive in bursts, but there have been two quiet portions. One is thanks to the current pandemic, and the other was during last year’s late spring/early summer. It’s hard to tell what the donation rate is just by eyeball, though. We need to smooth this out via a model.

The simplest such model is linear regression, a.k.a. fitting a line. We want to incorporate uncertainty into the mix, which means a Bayesian fit. Now, what MCMC engine to use, hmmm… emcee is my overall favourite, but I’m much too reliant on it. I’ve used PyMC3 a few times with success, but recently it’s been acting flaky. Time to pull out the big guns: Stan. I’ve been avoiding it because pystan’s compilation times drove me nuts, but all the cool kids switched to cmdstanpy while I looked away. Let’s give that a whirl.

(Click here to show the code)
CPU times: user 5.33 ms, sys: 7.33 ms, total: 12.7 ms
Wall time: 421 ms
CmdStan installed.

We can’t fit to the entire three-year time sequence; that just wouldn’t be fair, given the recent slump in donations. How about the last six months? That covers both a few donation bursts and a flat period, so it’s more in line with what we’d expect in future.
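Selecting that window is just a boolean mask on the shifted time axis. A sketch, with 182.6 days standing in for six months and a toy frame in place of the real donation log:

```python
import pandas as pd

# toy frame standing in for the real donation log
donations = pd.DataFrame({'delta_epoch_days': [-900, -400, -170, -30, 0]})

# keep only donations from roughly the last six months
recent = donations[donations['delta_epoch_days'] > -182.6]
print(len(recent))  # → 3
```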

(Click here to show the code)
There were 117 donations over the last six months.

With the data prepped, we can shift to building the linear model.

(Click here to show the code)

I could have just gone with Stan’s basic model, but flat priors aren’t my style. My preferred prior for the slope is the inverse tangent (i.e. uniform in the line’s angle), as it compensates for the tendency of large slope values to “bunch up” on one another. Stan doesn’t offer it by default, but the Cauchy distribution isn’t too far off.

We’d like the standard deviation to skew towards smaller values. It naturally tends to minimize itself when maximizing the likelihood, but an explicit skew will encourage this process along. Gelman and the Stan crew are drifting towards normal priors, but I still like a Cauchy prior for its weird properties.

Normally I’d plunk the Gaussian distribution in to handle divergence from the deterministic model, but I hear using Student’s T instead will cut down the influence of outliers. Thomas Wiecki recommends one degree of freedom, but Gelman and co. find that it leads to poor convergence in some cases. They recommend somewhere between three and seven degrees of freedom, but skew towards three, so I’ll go with the flow here.

The y-intercept could land pretty much anywhere, making its prior difficult to figure out. Yes, I’ve adjusted the time axis so that the last donation is at time zero, but the recent flat portion pretty much guarantees the y-intercept will be higher than the current amount of funds. The traditional approach is to use a flat prior for the intercept, and I can’t think of a good reason to ditch that.
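Putting those four choices together, the hidden model presumably looks something like this Stan sketch; the scale parameters and variable names here are my guesses, not the actual code:

```stan
data {
  int<lower=1> N;
  vector[N] x;               // days since the last donation
  vector[N] y;               // cumulative donations, in dollars
}
parameters {
  real m;                    // slope, dollars per day
  real b;                    // y-intercept; flat prior by omission
  real<lower=0> sigma;       // scatter around the line
}
model {
  m ~ cauchy(0, 10);         // stand-in for the arctangent prior
  sigma ~ cauchy(0, 25);     // the lower bound makes this a half-Cauchy
  y ~ student_t(3, m * x + b, sigma);  // 3 dof to blunt outliers
}
```

Leaving `b` without a sampling statement is how Stan expresses a flat prior.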

Not convinced I picked good priors? That’s cool, there should be enough data here that the priors have minimal influence anyway. Moving on, let’s see how long compilation takes.

(Click here to show the code)
CPU times: user 4.91 ms, sys: 5.3 ms, total: 10.2 ms
Wall time: 20.2 s

This is one area where emcee really shines: as a pure python library, it has zero compilation time. Both PyMC3 and Stan need some time to fire up an external compiler, which adds overhead. Twenty seconds isn’t too bad, though, especially if it leads to quick sampling times.

(Click here to show the code)
CPU times: user 14.7 ms, sys: 24.7 ms, total: 39.4 ms
Wall time: 829 ms

And it does! emcee can be pretty zippy for a simple linear regression, but Stan is in another class altogether. PyMC3 floats somewhere between the two, in my experience.

Another great feature of Stan is its built-in diagnostics. These are really handy for confirming the posterior converged, and if it didn’t, they can give you tips on what’s wrong with the model.

(Click here to show the code)
Processing csv files: /tmp/tmpyfx91ua9/linear_regression-202005262238-1-e393mc6t.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-2-8u_r8umk.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-3-m36dbylo.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-4-hxjnszfe.csv

Checking sampler transitions treedepth.
Treedepth satisfactory for all transitions.

Checking sampler transitions for divergences.
No divergent transitions found.

Checking E-BFMI - sampler transitions HMC potential energy.
E-BFMI satisfactory for all transitions.

Effective sample size satisfactory.

Split R-hat values satisfactory all parameters.

Processing complete, no problems detected.

The odds of a simple model with plenty of datapoints going sideways are pretty small, so this is another non-surprise. Enough waiting, though, let’s see the fit in action. First, we need to extract the posterior from the stored variables …

(Click here to show the code)
There are 256 samples in the posterior.

… and now free of its prison, we can plot the posterior against the original data. I’ll narrow the time window slightly, to make it easier to focus on the fit.

(Click here to show the code)

The same graph as before, but now slightly zoomed in on and with trendlines visible.

Looks like a decent fit to me, so we can start using it to answer a few questions. How much money is flowing into the fund each day, on average? How many years will it be until all those legal bills are paid off? Since humans aren’t good at counting in years, let’s also translate that number into a specific date.
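Mechanically, that calculation is simple once you have posterior slope samples in hand. Here’s a hedged sketch where the outstanding balance and the posterior are made up for illustration; the real numbers live in the hidden code:

```python
import datetime as dt

import numpy as np

rng = np.random.default_rng(42)
slopes = rng.normal(51.62, 1.65, size=256)  # stand-in posterior samples, $/day
remaining = 37000.0                         # hypothetical outstanding legal fees
now = dt.datetime(2020, 5, 25, 12, 36, 39)

days_left = remaining / slopes              # one payoff estimate per sample
payoff = [now + dt.timedelta(days=float(d)) for d in days_left]
print(f"median payoff estimate: {sorted(payoff)[len(payoff) // 2]:%Y-%m}")
```

Because each posterior sample yields its own payoff date, you get a full distribution of dates for free, and can quote the mean, median, or spread as needed.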

(Click here to show the code)
mean/std/median slope = $51.62/1.65/51.76 per day

mean/std/median years to pay off the legal fees, relative to 2020-05-25 12:36:39-05:00 =
	1.962/0.063/1.955

mean/median estimate for paying off debt =
	2022-05-12 07:49:55.274942-05:00 / 2022-05-09 13:57:13.461426-05:00

Mid-May 2022, eh? That’s… not ideal. How much time can we shave off, if we increase the donation rate? Let’s play out a few scenarios.

(Click here to show the code)
median estimate for paying off debt, increasing rate by   1% = 2022-05-02 17:16:37.476652800
median estimate for paying off debt, increasing rate by   3% = 2022-04-18 23:48:28.185868800
median estimate for paying off debt, increasing rate by  10% = 2022-03-05 21:00:48.510403200
median estimate for paying off debt, increasing rate by  30% = 2021-11-26 00:10:56.277984
median estimate for paying off debt, increasing rate by 100% = 2021-05-17 18:16:56.230752

Bumping up the donation rate by one percent is pitiful. A three percent increase will almost shave off a month, which is just barely worthwhile, and a ten percent increase will pull the date in by two months. Those sound like good starting points, so let’s make them official: increase the current donation rate by three percent, and I’ll start pumping out the aforementioned blog posts on Bayesian statistics. Manage to increase it by 10%, and I’ll also record them as videos.

As implied, I don’t intend to keep the same rate throughout this entire process. If you surprise me with your generosity, I’ll bump up the rate. By the same token, though, if we go through a dry spell I’ll decrease the rate so the targets are easier to hit. My goal is to have at least a 50% success rate on that lower bar. Wouldn’t that make it impossible to hit the video target? Remember, though, it’ll take some time to determine the success rate. That lag should make it possible to blow past the target, and by the time this becomes an issue I’ll have thought of a better fix.

Ah, but over what timeframe should this rate increase? We could easily blow past the three percent target if someone donates a hundred bucks tomorrow, after all, and it’s no fair to announce this and hope your wallets are ready to go in an instant. How about… sixteen days. You’ve got sixteen days to hit one of those rate targets. That’s a nice round number, for a computer scientist, and it should (hopefully!) give me just enough time to whip up the first post. What does that goal translate to, in absolute numbers?

(Click here to show the code)
a   3% increase over 16 days translates to $851.69 + $78039.00 = $78890.69

Right, if you want those blog posts to start flowing you’ve got to get that fundraiser total to $78,890.69 before June 12th. As for the video…

(Click here to show the code)
a  10% increase over 16 days translates to $909.57 + $78039.00 = $78948.57

… you’ve got to hit $78,948.57 by the same date.
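As a sanity check, both targets fall out of a single implied daily rate of roughly $51.68, a figure I back-calculated from the totals above rather than pulled from the hidden code:

```python
base_total = 78039.00  # fund total at the start of the window
rate = 51.68           # implied median donation rate, dollars per day
days = 16

for bump, label in [(0.03, 'blog post'), (0.10, 'video')]:
    extra = rate * days * (1 + bump)
    print(f"{label}: ${extra:.2f} + ${base_total:.2f} = ${base_total + extra:.2f}")
# → blog post: $851.69 + $78039.00 = $78890.69
# → video: $909.57 + $78039.00 = $78948.57
```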

Ready? Set? Get donating!

Deep Penetration Tests

We now live in an age where someone can back door your back door.

Analysts believe there are currently on the order of 10 billions Internet of Things (IoT) devices out in the wild. Sometimes, these devices find their way up people’s butts: as it turns out, cheap and low-power radio-connected chips aren’t just great for home automation – they’re also changing the way we interact with sex toys. In this talk, we’ll dive into the world of teledildonics and see how connected buttplugs’ security holds up against a vaguely motivated attacker, finding and exploiting vulnerabilities at every level of the stack, ultimately allowing us to compromise these toys and the devices they connect to.

Writing about this topic is hard, and not just because penises may be involved. IoT devices pose a grave security risk for all of us, but probably not for you personally. For instance, security cameras have been used to launch attacks on websites. When was the last time you updated the firmware on your security camera, or ran a security scan of it? Probably never. Has your security camera been taken over? Maybe: as of 2017, roughly half the internet-connected cameras in the USA were part of a botnet. Has it been hacked and commanded to send your data to a third party? Almost certainly not; these security-cam hacks almost all target something else. Human beings are terrible at assessing risk in general, and the combination of catastrophic consequences for some people but minimal consequences for you only amplifies our weaknesses.

There’s a very fine line between “your car can be hacked to cause a crash!” and “some cars can be hacked to cause a crash,” between “your TV is tracking your viewing habits” and “your viewing habits are available to anyone who knows where to look!” Finding the right balance between complacency and alarmism is impossible given how much we don’t know. And as computers become more intertwined with our intimate lives, whole new incentives come into play. Proportionately, more people would be willing to file a police report about someone hacking their toaster than about someone hacking their butt plug. Not many people own a smart sex toy, but those that do form a very attractive hacking target.

There’s not much we can do about this individually. Forcing people to take an extensive course in internet security just to purchase a butt plug is blaming the victim, and asking the market to solve the problem doesn’t work when market incentives caused the problem in the first place. A proper solution requires collective action as a society, via laws and incentives that help protect our privacy.

Then, and only then, can you purchase your sex toys in peace.

The Crisis of the Mediocre Man

I was browsing YouTube videos on PyMC3, as one naturally does, when I happened to stumble on this gem.

Tech has spent millions of dollars in efforts to diversify workplaces. Despite this, it seems after each spell of progress, a series of retrograde events ensue. Anti-diversity manifestos, backlash to assertive hiring, and sexual misconduct scandals crop up every few months, sucking the air from every board room. This will be a digest of research, recent events, and pointers on women in STEM.

Lorena A. Barba really knows her stuff; the entire talk is a rapid-fire accounting of claims and counterclaims, aimed to directly appeal to the male techbros who need to hear it. There was a lot of new material in there, for me at least. I thought the only well-described matriarchies came from the African continent, but it turns out the Algonquin also fit that bill. Some digging turns up a rich mix of gender roles within First Nations peoples, most notably the Iroquois and Hopi. I was also depressed to hear that the R data analysis community is better at dealing with sexual harassment than the skeptic/atheist community.

But what really grabbed my ears was the section on gender quotas. I’ve long been a fan of them on logical grounds: if we truly believe the sexes are equal, then unequal representation tells us discrimination is happening. By forcing equality, we greatly reduce network effects where one gender can team up against the other. Worried about an increase in mediocrity? At worst that’s a temporary thing that disappears once the disadvantaged sex gets more experience, and at best the overall quality will actually go up. The research on quotas has advanced quite a bit since that old Skepchick post. Emphasis mine.

In 1993, Sweden’s Social Democratic Party centrally adopted a gender quota and imposed it on all the local branches of that party (…). Although their primary aim was to improve the representation of women, proponents of the quota observed that the reform had an impact on the competence of men. Inger Segelström (the chair of Social Democratic Women in Sweden (S-Kvinnor), 1995–2003) made this point succinctly in a personal communication:

At the time, our party’s quota policy of mandatory alternation of male and female names on all party lists became informally known as the crisis of the mediocre man

We study the selection of municipal politicians in Sweden with regard to their competence, both theoretically and empirically. Moreover, we exploit the Social Democratic quota as a shock to municipal politics and ask how it altered the competence of that party’s elected politicians, men as well as women, and leaders as well as followers.

Besley, Timothy, Olle Folke, Torsten Persson, and Johanna Rickne. “Gender Quotas and the Crisis of the Mediocre Man: Theory and Evidence from Sweden.” American Economic Review 107, no. 8 (2017): 2204–42.

We can explain this with the benefit of hindsight: if men can rely on the “old boy’s network” to keep them in power, they can afford to slack off. If other sexes cannot, they have to fight to earn their place. These are all social effects, though; if no sex holds a monopoly on operational competence in reality, the net result is a handful of brilliant women among a sea of iffy men. Gender quotas severely limit the social effects, effectively kicking out the mediocre men to make way for average women, and thus increase the average competence.

As tidy as that picture is, it’s wrong in one crucial detail. Emphasis again mine.

These estimates show that the overall effect mainly reflects an improvement in the selection of men. The coefficient in column 4 means that a 10-percentage-point larger quota bite (just below the cross-sectional average for all municipalities) raised the proportion of competent men by 4.4 percentage points. Given an average of 50 percent competent politicians in the average municipality (by definition, from the normalization), this corresponds to a 9 percent increase in the share of competent men.

For women, we obtain a negative coefficient in the regression specification without municipality trends, but a positive coefficient with trends. In neither case, however, is the estimate significantly different from zero, suggesting that the quota neither raised nor cut the share of competent women. This is interesting in view of the meritocratic critique of gender quotas, namely that raising the share of women through a quota must necessarily come at the price of lower competence among women.

Increasing the number of women does not also increase the number of incompetent women. When you introduce a quota, apparently, everyone works harder to justify being there. The only people truly hurt by gender quotas are mediocre men who rely on the Peter Principle.

The like ratio for said talk: 47 likes, 55 dislikes, FYI.

Alas, if that YouTube like ratio is any indication, there’s a lot of them out there.