Trying an Experiment

Usually when I get addicted to something, obsessing over it for a week or two is enough to get it out of my system. My Minecraft addiction has been going strong for a month and a half, though, with no signs of abating. The time I used to spend daydreaming up blog posts has instead gone to testing the best design for a lava blade, or roughing out a flexible design for a mountain home in Blender.

When pondering what to do about the situation, though, I realized something: I’m a noob, but not a complete one. Yes, I can say “mob grinder” with a straight face, but I’ve also died tonnes of times just establishing my base. I still don’t have any enchantments, and I can count the number of iron bars I own on one hand, but much of that tardiness is because I’ve been more focused on finding the right location for a home. I’m not bragging about amazing builds or redstone skillz, but I’m not flailing around either, and I haven’t seen a lot of Minecraft players at that intermediate skill level.

I figure it might be watchable. And if I’m putting in the hours anyway, I might as well try my hand at streaming the game. And so, I have. I’ve currently got five videos up, in fact, though I hear Twitch deletes old videos unless you fork over cash. If they start disappearing, I’ll archive them on YouTube.

Until then, here’s a quick overview of what I’m hoping to build and the constraints I’ve placed on myself, followed by four more videos where I start executing on it. Looking back on the series, I’m still a bit amazed at the pace of improvement on a technical level. There’s no way I can keep that pace up, but at least I can always craft more mines.

Fundraising Update 1

TL;DR: We’re pretty much on track, though we also haven’t hit the goal of pushing the fund past $78,890.69. Donate and help put the fund over the line!

With the short version out of the way, let’s dive into the details. What’s changed in the past week and change?

import datetime as dt

import matplotlib.pyplot as pl

import pandas as pd
import pandas.tseries.offsets as pdto


# the date the original fundraising post went up (UTC-6)
cutoff_day = dt.datetime( 2020, 5, 27, tzinfo=dt.timezone(dt.timedelta(hours=-6)) )

donations = pd.read_csv('donations.cleaned.tsv', sep='\t')

donations['epoch'] = pd.to_datetime(donations['created_at'])
donations['delta_epoch'] = donations['epoch'] - cutoff_day
donations['delta_epoch_days'] = donations['delta_epoch'].apply(lambda x: x.days)

# some adjustment is necessary to line up with the current total
donations['culm'] = donations['amount'].cumsum() + 14723

# flag donations that arrived after the original post was published
new_donations_mask = donations['delta_epoch_days'] > 0
print( f"There have been {sum(new_donations_mask)} donations since {cutoff_day}." )
There have been 8 donations since 2020-05-27 00:00:00-06:00.

There have been a reasonable number of donations since I published that original post. What does that look like, relative to the previous graph?

pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')

pl.plot( donations['delta_epoch_days'], donations['culm'], '-',c='#aaaaaa')
pl.plot( donations['delta_epoch_days'][new_donations_mask], \
        donations['culm'][new_donations_mask], '-',c='#0099ff')

pl.title("Defense against Carrier SLAPP Suit")

pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim( [-365.26,donations['delta_epoch_days'].max()] )
pl.ylim( [55000,82500] )
pl.show()

An updated chart from the past year. New donations are in blue.

That’s certainly an improvement in the short term, though the graph is much too zoomed out to say more. Let’s zoom in, and overlay the posterior.

import numpy as np

# load the previously-fitted posterior; the first column holds the slopes
flat_chain = np.loadtxt('starting_posterior.csv')


pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')

x = np.array([0, donations['delta_epoch_days'].max()])
# draw one faint red trend line per posterior sample, anchored at the current total
for m,_,_ in flat_chain:
    pl.plot( x, m*x + 78039, '-r', alpha=0.05 )
    
pl.plot( donations['delta_epoch_days'], donations['culm'], '-', c='#aaaaaa')
pl.plot( donations['delta_epoch_days'][new_donations_mask], \
        donations['culm'][new_donations_mask], '-', c='#0099ff')

pl.title("Defense against Carrier SLAPP Suit")

pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim( [-3,x[1]+1] )
pl.ylim( [77800,79000] )

pl.show()

A zoomed-in view of the new donations, with posteriors overlaid.

Hmm, looks like we’re right where the posterior predicted we’d be. My targets were pretty modest, though, consisting of increases of 3% and 10%, so this doesn’t mean they’ve been missed. Let’s extend the chart to day 16, and explicitly overlay the two targets I set out.

low_target = 78890.69
high_target = 78948.57
target_day = dt.datetime( 2020, 6, 12, 23, 59, tzinfo=dt.timezone(dt.timedelta(hours=-6)) )
target_since_cutoff = (target_day - cutoff_day).days

pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')

x = np.array([0, target_since_cutoff])
# grayish band: on pace for the blog post (between the 3% and 10% target trajectories)
pl.fill_between( x, [78039, low_target], [78039, high_target], color='#ccbbbb', label='blog post')
# pink band: on pace for the video as well (everything above the 10% trajectory)
pl.fill_between( x, [78039, high_target], [high_target, high_target], color='#ffeeee', label='video')

pl.plot( donations['delta_epoch_days'], donations['culm'], '-',c='#aaaaaa')
pl.plot( donations['delta_epoch_days'][new_donations_mask], \
        donations['culm'][new_donations_mask], '-',c='#0099ff')

pl.title("Defense against Carrier SLAPP Suit")

pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim( [-3, target_since_cutoff] )
pl.ylim( [77800,high_target] )

pl.legend(loc='lower right')
pl.show()

The previous graph, this time with targets overlaid.

To earn a blog post and video on Bayes from me, we need the line to be in the pink zone by the time it reaches the end of the graph. For just the blog post, it need only be in the grayish area. As you can see, it’s painfully close to being in line with the lower of the two goals, though if nobody donates between now and Friday it’ll obviously fall quite short.

So if you want to see that blog post, get donating!

Fundraising Target Number 1

If our goal is to raise funds for a good cause, we should at least have an idea of where the funds are at.

(Click here to show the code)
created_at amount epoch delta_epoch culm
0 2017-01-24T07:27:51-06:00 10.0 2017-01-24 07:27:51-06:00 -1218 days +19:51:12 14733.0
1 2017-01-24T07:31:09-06:00 50.0 2017-01-24 07:31:09-06:00 -1218 days +19:54:30 14783.0
2 2017-01-24T07:41:20-06:00 100.0 2017-01-24 07:41:20-06:00 -1218 days +20:04:41 14883.0
3 2017-01-24T07:50:20-06:00 10.0 2017-01-24 07:50:20-06:00 -1218 days +20:13:41 14893.0
4 2017-01-24T08:03:26-06:00 25.0 2017-01-24 08:03:26-06:00 -1218 days +20:26:47 14918.0

Changing the dataset so the last donation happens at time zero makes it both easier to fit the data and easier to understand what’s happening. The first day after the last donation is now day one.
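
That prep code is hidden behind the toggle above, but a minimal sketch of the re-zeroing step might look like the following. The column names and the +14723 adjustment are carried over from the update earlier on this page, so treat this as a guess rather than the post’s literal code.

import pandas as pd

# hypothetical sketch: re-zero the time axis on the most recent donation
donations = pd.read_csv('donations.cleaned.tsv', sep='\t')
donations['epoch'] = pd.to_datetime(donations['created_at'])

last_donation = donations['epoch'].max()
donations['delta_epoch'] = donations['epoch'] - last_donation
donations['delta_epoch_days'] = donations['delta_epoch'].apply(lambda x: x.days)

# running total, nudged to line up with the fundraiser's posted amount
donations['culm'] = donations['amount'].cumsum() + 14723

donations.head()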

Donations from 2017 don’t tell us much about the current state of the fund, though, so let’s focus on just the last year.

(Click here to show the code)

The last year of donations, for the lawsuit fundraiser.

The donations seem to arrive in bursts, but there have been two quiet stretches: one thanks to the current pandemic, and another during last year’s late spring and early summer. It’s hard to tell what the donation rate is just by eyeballing it, though. We need to smooth this out via a model.

The simplest such model is linear regression, a.k.a. fitting a line. We want to incorporate uncertainty into the mix, which means a Bayesian fit. Now, what MCMC engine to use, hmmm… emcee is my overall favourite, but I’m much too reliant on it. I’ve used PyMC3 a few times with success, but recently it’s been acting flaky. Time to pull out the big guns: Stan. I’d been avoiding it because pystan’s compilation times drove me nuts, but apparently all the cool kids switched to cmdstanpy while I wasn’t looking. Let’s give that a whirl.

(Click here to show the code)
CPU times: user 5.33 ms, sys: 7.33 ms, total: 12.7 ms
Wall time: 421 ms
CmdStan installed.
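
The hidden cell presumably amounts to a one-time setup along these lines; a sketch, not the post’s literal code.

import cmdstanpy

# make sure a CmdStan toolchain is available: cheap if it's already installed,
# a download-and-build if it isn't
cmdstanpy.install_cmdstan()
print("CmdStan installed.")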

We can’t fit to the entire three-year time sequence; that just wouldn’t be fair given the recent slump in donations. How about the last six months? That covers both a few donation bursts and a flat period, so it’s more in line with what we’d expect in the future.

(Click here to show the code)
There were 117 donations over the last six months.
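
Again, the code itself is hidden, but with day zero pinned to the last donation the filter is probably something in this vein; the exact threshold is my guess.

# hypothetical filter: keep roughly the last six months of donations
six_months_ago = -365.25 / 2
recent = donations[donations['delta_epoch_days'] >= six_months_ago]
print(f"There were {len(recent)} donations over the last six months.")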

With the data prepped, we can shift to building the linear model.

(Click here to show the code)

I could have just gone with Stan’s basic model, but flat priors aren’t my style. My preferred prior for the slope is the inverse tangent, as it compensates for the tendency of large slope values to “bunch up” on one another. Stan doesn’t offer it by default, but the Cauchy distribution isn’t too far off.

We’d like the standard deviation to skew towards smaller values. It naturally tends to shrink when maximizing the likelihood, but an explicit skew will nudge that process along. Gelman and the Stan crew are drifting towards normal priors, but I still like a Cauchy prior for its weird properties.

Normally I’d plunk the Gaussian distribution in to handle divergence from the deterministic model, but I hear using Student’s T instead will cut down the influence of outliers. Thomas Wiecki recommends one degree of freedom, but Gelman and co. find that it leads to poor convergence in some cases. They recommend somewhere between three and seven degrees of freedom, but skew towards three, so I’ll go with the flow here.

The y-intercept could land pretty much anywhere, making its prior difficult to figure out. Yes, I’ve adjusted the time axis so that the last donation is at time zero, but the recent flat portion pretty much guarantees the y-intercept will be higher than the current amount of funds. The traditional approach is to use a flat prior for the intercept, and I can’t think of a good reason to ditch that.
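
Putting those choices together, a minimal Stan program in that spirit might read as follows. This is a sketch aimed at cmdstanpy rather than the post’s hidden model, and the Cauchy scale parameters are placeholders I picked myself.

from pathlib import Path

# a guess at the hidden model; the prior scales are placeholders, not the post's values
model_code = """
data {
  int<lower=1> N;
  vector[N] x;    // days relative to the last donation
  vector[N] y;    // cumulative dollars
}
parameters {
  real m;                // slope, in dollars per day
  real b;                // y-intercept; no sampling statement means a flat prior
  real<lower=0> sigma;   // scatter around the trend line
}
model {
  m ~ cauchy(0, 50);       // heavy-tailed stand-in for the inverse-tangent prior
  sigma ~ cauchy(0, 100);  // half-Cauchy thanks to the lower bound, skewed small
  y ~ student_t(3, m * x + b, sigma);  // three degrees of freedom to blunt outliers
}
"""

Path("linear_regression.stan").write_text(model_code)

The file name is a guess too, though it matches the model name that shows up in the diagnostic output further down.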

Not convinced I picked good priors? That’s cool, there should be enough data here that the priors have minimal influence anyway. Moving on, let’s see how long compilation takes.

(Click here to show the code)
CPU times: user 4.91 ms, sys: 5.3 ms, total: 10.2 ms
Wall time: 20.2 s
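
If the model really does live in a file like the sketch above, the timed cell is likely little more than:

import cmdstanpy

# compiling the Stan program into a CmdStan executable is the slow part
model = cmdstanpy.CmdStanModel(stan_file="linear_regression.stan")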

This is one area where emcee really shines: as a pure Python library, it has zero compilation time. Both PyMC3 and Stan have to fire up an external compiler, which adds overhead. Twenty seconds isn’t too bad, though, especially if it leads to quick sampling times.

(Click here to show the code)
CPU times: user 14.7 ms, sys: 24.7 ms, total: 39.4 ms
Wall time: 829 ms
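
The sampling call is hidden too. Here’s one plausible shape for it, reusing the recent subset from the earlier sketch; the chain, iteration, and seed settings are guesses.

stan_data = {
    'N': len(recent),
    'x': recent['delta_epoch_days'].values,
    'y': recent['culm'].values,
}

# four chains of NUTS; the iteration counts are guesses, not the post's settings
fit = model.sample(data=stan_data, chains=4, iter_warmup=1000, iter_sampling=1000, seed=20200526)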

And it does! emcee can be pretty zippy for a simple linear regression, but Stan is in another class altogether. PyMC3 floats somewhere between the two, in my experience.

Another great feature of Stan is its built-in diagnostics. They’re really handy for confirming the posterior converged, and if it didn’t, they can give you tips on what’s wrong with the model.

(Click here to show the code)
Processing csv files: /tmp/tmpyfx91ua9/linear_regression-202005262238-1-e393mc6t.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-2-8u_r8umk.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-3-m36dbylo.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-4-hxjnszfe.csv

Checking sampler transitions treedepth.
Treedepth satisfactory for all transitions.

Checking sampler transitions for divergences.
No divergent transitions found.

Checking E-BFMI - sampler transitions HMC potential energy.
E-BFMI satisfactory for all transitions.

Effective sample size satisfactory.

Split R-hat values satisfactory all parameters.

Processing complete, no problems detected.
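
That report is exactly what CmdStan’s diagnose utility prints, and cmdstanpy exposes it directly, so the hidden cell is probably just a call like this.

# run CmdStan's diagnose tool over the stored chains and print its report
print(fit.diagnose())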

The odds of a simple model with plenty of data points going sideways are pretty small, so this is another non-surprise. Enough waiting, though; let’s see the fit in action. First, we need to extract the posterior from the stored variables …

(Click here to show the code)
There are 256 samples in the posterior.
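
The extraction is hidden as well. One way it could go, using cmdstanpy’s per-parameter accessor: the slope sits in the first column to match the unpacking in the update above, but the order of the other two columns, and any thinning down to 256 samples, are assumptions on my part.

import numpy as np

# stack the per-parameter draws into the flat array the plotting code expects
flat_chain = np.column_stack([
    fit.stan_variable('m'),       # slopes first, so the plotting loop can unpack them
    fit.stan_variable('sigma'),
    fit.stan_variable('b'),
])

np.savetxt('starting_posterior.csv', flat_chain)
print(f"There are {len(flat_chain)} samples in the posterior.")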

… and now free of its prison, we can plot the posterior against the original data. I’ll narrow the time window slightly, to make it easier to focus on the fit.

(Click here to show the code)

The same graph as before, but slightly zoomed in and with trendlines visible.

Looks like a decent fit to me, so we can start using it to answer a few questions. How much money is flowing into the fund each day, on average? How many years will it be until all those legal bills are paid off? Since humans aren’t good at counting in years, let’s also translate that number into a specific date.

(Click here to show the code)
mean/std/median slope = $51.62/1.65/51.76 per day

mean/std/median years to pay off the legal fees, relative to 2020-05-25 12:36:39-05:00 =
	1.962/0.063/1.955

mean/median estimate for paying off debt =
	2022-05-12 07:49:55.274942-05:00 / 2022-05-09 13:57:13.461426-05:00
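
The hidden cell has to be doing arithmetic roughly like the sketch below. The post never states the total owed, so legal_debt is left as a stand-in parameter, and the summaries are simplified.

def payoff_summary(legal_debt, rate_increase=0.0, current_total=78039.00):
    """Sketch of the payoff estimate; `legal_debt` stands in for the unstated total owed."""
    remaining = legal_debt - current_total
    slopes = flat_chain[:, 0] * (1 + rate_increase)   # dollars per day, one per posterior sample
    days_to_payoff = remaining / slopes
    years_to_payoff = days_to_payoff / 365.25

    last_donation = donations['epoch'].max()
    payoff_dates = (last_donation + pd.to_timedelta(days_to_payoff, unit='D')).sort_values()

    print(f"median slope = ${np.median(slopes):.2f} per day")
    print(f"median years to pay off the legal fees = {np.median(years_to_payoff):.3f}")
    print(f"median estimate for paying off debt = {payoff_dates[len(payoff_dates) // 2]}")

The scenarios below would then just be this same function called with different values of rate_increase.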

Mid-May 2022, eh? That’s… not ideal. How much time can we shave off, if we increase the donation rate? Let’s play out a few scenarios.

(Click here to show the code)
median estimate for paying off debt, increasing rate by   1% = 2022-05-02 17:16:37.476652800
median estimate for paying off debt, increasing rate by   3% = 2022-04-18 23:48:28.185868800
median estimate for paying off debt, increasing rate by  10% = 2022-03-05 21:00:48.510403200
median estimate for paying off debt, increasing rate by  30% = 2021-11-26 00:10:56.277984
median estimate for paying off debt, increasing rate by 100% = 2021-05-17 18:16:56.230752

Bumping up the donation rate by one percent is pitiful. A three percent increase shaves off almost a month, which is just barely worthwhile, and a ten percent increase rolls the date forward by about two months. Those sound like good starting points, so let’s make them official: increase the current donation rate by three percent, and I’ll start pumping out the aforementioned blog posts on Bayesian statistics. Manage to increase it by ten percent, and I’ll also record them as videos.

As implied, I don’t intend to keep the same rate throughout this entire process. If you surprise me with your generosity, I’ll bump up the rate. By the same token, though, if we go through a dry spell I’ll decrease the rate so the targets are easier to hit. My goal is to have at least a 50% success rate on that lower bar. Wouldn’t that make it impossible to hit the video target? Remember, though, it’ll take some time to determine the success rate. That lag should make it possible to blow past the target, and by the time this becomes an issue I’ll have thought of a better fix.

Ah, but over what timeframe should this rate increase? We could easily blow past the three percent target if someone donates a hundred bucks tomorrow, after all, and it’s no fair to announce this and hope your wallets are ready to go in an instant. How about… sixteen days. You’ve got sixteen days to hit one of those rate targets. That’s a nice round number, for a computer scientist, and it should (hopefully!) give me just enough time to whip up the first post. What does that goal translate to, in absolute numbers?

(Click here to show the code)
a   3% increase over 16 days translates to $851.69 + $78039.00 = $78890.69

Right, if you want those blog posts to start flowing you’ve got to get that fundraiser total to $78,890.69 before June 12th. As for the video…

(Click here to show the code)
a  10% increase over 16 days translates to $909.57 + $78039.00 = $78948.57

… you’ve got to hit $78,948.57 by the same date.
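
For the curious, both of those figures fall out of the same projection: scale the posterior slopes by the proposed increase and run them out over the sixteen days. A rough reconstruction, which won’t reproduce the post’s exact cents:

current_total = 78039.00
for increase, label in [(0.03, '3%'), (0.10, '10%')]:
    # median daily rate, boosted by the target increase and projected over 16 days
    raised = np.median(flat_chain[:, 0] * (1 + increase)) * 16
    print(f"a {label:>4} increase over 16 days translates to "
          f"${raised:.2f} + ${current_total:.2f} = ${raised + current_total:.2f}")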

Ready? Set? Get donating!

It’s Payback Time

I’m back! Yay! Sorry about all that, but my workload was just ridiculous. Things should be a lot more slack for the next few months, so it’s time I got back to blogging. This also means I can finally put into action something I’ve been sitting on for months.

Richard Carrier has been a sore spot for me. He was one of the reasons I got interested in Bayesian statistics, and for a while there I thought he was a cool progressive. Alas, when it was revealed he was instead a vindictive creepy asshole, it shook me a bit. I promised myself I’d help out somehow, but I’d already done the obsessive analysis thing and in hindsight I’m not convinced it did more good than harm. I was at a loss for what I could do, beyond sharing links to the fundraiser.

Now, I think I know. The lawsuits may be long over, thanks to Carrier coincidentally dropping them at roughly the same time he came under threat of a counter-suit, but the legal bills are still there and not going away anytime soon. Worse, with the removal of the threat, people are starting to forget about those debts. There have been only five donations this month, and four in April. It’s time to bring a little attention back that way.

One nasty side-effect of Carrier’s lawsuits is that Bayesian statistics has become a punchline in the atheist/skeptic community. The reasoning is understandable, if flawed: Carrier is a crank, he promotes Bayesian statistics, ergo Bayesian statistics must be the tool of crackpots. This has been surreal for me to witness, as Bayes has become a critical tool in my kit over the last three years. I suppose I could survive without it, if I had to, but every alternative I’m aware of is worse. I’m not the only one in this camp, either.

Following the emergence of a novel coronavirus (SARS-CoV-2) and its spread outside of China, Europe is now experiencing large epidemics. In response, many European countries have implemented unprecedented non-pharmaceutical interventions including case isolation, the closure of schools and universities, banning of mass gatherings and/or public events, and most recently, widescale social distancing including local and national lockdowns. In this report, we use a semi-mechanistic Bayesian hierarchical model to attempt to infer the impact of these interventions across 11 European countries.

Flaxman, Seth, Swapnil Mishra, Axel Gandy, H Juliette T Unwin, Helen Coupland, Thomas A Mellan, Tresnia Berah, et al. “Estimating the Number of Infections and the Impact of Non-Pharmaceutical Interventions on COVID-19 in 11 European Countries,” 2020, 35.

In estimating time intervals between symptom onset and outcome, it was necessary to account for the fact that, during a growing epidemic, a higher proportion of the cases will have been infected recently (…). Therefore, we re-parameterised a gamma model to account for exponential growth using a growth rate of 0·14 per day, obtained from the early case onset data (…). Using Bayesian methods, we fitted gamma distributions to the data on time from onset to death and onset to recovery, conditional on having observed the final outcome.

Verity, Robert, Lucy C. Okell, Ilaria Dorigatti, Peter Winskill, Charles Whittaker, Natsuko Imai, Gina Cuomo-Dannenburg, et al. “Estimates of the Severity of Coronavirus Disease 2019: A Model-Based Analysis.” The Lancet Infectious Diseases 0, no. 0 (March 30, 2020). https://doi.org/10.1016/S1473-3099(20)30243-7.

we used Bayesian methods to infer parameter estimates and obtain credible intervals.

Linton, Natalie M., Tetsuro Kobayashi, Yichi Yang, Katsuma Hayashi, Andrei R. Akhmetzhanov, Sung-mok Jung, Baoyin Yuan, Ryo Kinoshita, and Hiroshi Nishiura. “Incubation Period and Other Epidemiological Characteristics of 2019 Novel Coronavirus Infections with Right Truncation: A Statistical Analysis of Publicly Available Case Data.” Journal of Clinical Medicine 9, no. 2 (February 2020): 538. https://doi.org/10.3390/jcm9020538.

A significant chunk of our understanding of COVID-19 depends on Bayesian statistics. I’ll go further and argue that you cannot fully understand this pandemic without it. And yet thanks to Richard Carrier, the atheist/skeptic community is primed to dismiss Bayesian statistics.

So let’s catch two stones with one bird. If enough people donate to this fundraiser, I’ll start blogging a course on Bayesian statistics. I think I’ve got a novel angle on the subject, one that’s easier to slip into than my 201-level stuff and yet more rigorous. If y’all really start tossing in the funds, I’ll make it a video series. Yes yes, there’s a pandemic and potential global depression going on, but that just means I’ll work for cheap! I’ll release the milestones and course outline over the next few days, but there’s no harm in an early start.

Help me help the people Richard Carrier hurt. I’ll try to make it worth your while.

Timeline: Rachel Oates and EssenceOfThought

I’ve already covered some of this material, as has EoT, so you might be wondering why I’m repeating myself months after the events in question.

The old stuff hasn’t been well organized, nor placed in chronological order. My own efforts, for instance, came at the end of the second half of a long blog post where I was pretty harsh on Rationality Rules. There’s room for a more dispassionate summary of the full context of what happened, especially if allegations about this “will be amplified by social media and echo for weeks, months, maybe years.” I’m pretty firmly on EoT’s side, but by minimizing my commentary in favour of direct quotes I can create a summary that Rachel Oates’ supporters will also find useful. The primary bias of this post will thus be via lies of omission, so I’ll try to be as comprehensive as possible. There’s also material that neither EoT nor I have mentioned, most of it focused on Rachel Oates’ side of the equation, so her point of view is better represented.

With that intro out of the way, let’s begin at the beginning. All dates and times are based on Twitter’s timestamp, which I think uses my timezone of Mountain Daylight Time, though it’ll be helpful to know about India Standard Time. Oh, and CONTENT WARNING for transphobia, plus mention of suicide and self-harm. [Read more…]

The Progressive Secular Alliance

I was a little amazed at how few people wanted the Atheist Experience blog to remain on FtB. I counted two people arguing for them to remain: one was ignoring the contents of the original post, and the other had a history of transphobia themselves. Then the thread inevitably descended into debating whether or not transgender women are women. The Atheist Community of Austin’s new board have trashed the organization’s prior reputation and destroyed people’s trust, and the odds of them rebuilding it are effectively zero, thanks in part to Matt Dillahunty’s shoddy leadership.

But I was also surprised that a name never came up. When any organization of that size undergoes this sort of scandal, it’s inevitable that some former members will branch off and form their own group. In this case, that group is the Progressive Secular Alliance. They currently have a YouTube channel and Facebook page. It’s still early days, but so far I’ve heard good feedback about them. If you’re an Austin-area atheist, give them a look, and even if you’re not, remember that many of these people helped build and maintain the former ACA. Their content will likely be similar to what drew you to the ACA in the first place.


Some Much-Needed Follow-up

You can almost watch my opinion flip in real time.

September 25, 2014 at 11:57 am

If Benson made a habit of linking to TERF materials, even though she knew where they came from and had plenty of alternatives, I wouldn’t be so quick to defend her. But this is a single cartoon that is only problematic because of its source, and even then you had to either know TERF lingo or read carefully to discover the source was problematic. It should be entirely forgivable, at minimum, especially if Benson made it clear she didn’t endorse trans exclusion once she knew of the source. Which she did.

That some people aren’t willing to forgive this no matter what Benson does outs them as demanding perfection from imperfect beings. Only the most fanatic religious fundamentalists agree to that.

=====

September 25, 2014 at 9:38 pm

Not only does coded language allow you to get away with saying racist/sexist/classist things, you might trick non-racist/sexist/classist people into supporting you. I myself was thinking of sharing an image elsewhere, until I saw octopod raise the TERF flag and went “hmmm, I might be missing something here.” On and off over several hours, I scratched my head trying to work out what that was. “I suppose that one comic ‘reinderdijkhuis’ linked to made it explicit, but I didn’t spot anything else as bad. Though, now that I think of it, that rainbow comic looked like a coded message. And it was weird the masthead used the word ‘cotton’ but I’m HOLY SHIT HOW COULD I BE THAT BLIND….” […]

In that moment, a page I originally thought contained a mix of funny but heavily obscure comics was revealed to be a vicious cacophony of sexist dog whistles. EVERY comic was dripping with hate, but in some of them it was so carefully hidden that it looked like feminist commentary. Those could easily float around Facebook, with only a select few snickering over the true message being passed around. Imagine sharing an image that mocked Obama for being a warmonger, following the link to the source, and stumbling across a white supremacist website. If you were black, that would be horrific.

Hopefully that should explain why the image had to go, and why I was wrong to edge towards the “devil’s advocate” chair. My apologies for taking so long to clue in.

I was a latecomer to Ophelia Benson’s transphobia; other people had been aware of it for at least a year before my flip began. The whisper network had started talking, and I decided to listen. I owe a debt of gratitude to the people who helped me move from clueless to slightly-less-so, people like abbeycadabra, Janine, Xanthë, and Jason Thibeault. That also means I should take critiques from them seriously, as my understanding isn’t as far along as theirs.

Frankly, for a trans person, there’s something surreal and erasing in seeing cis people feuding with cis people over whether we exist. I mean, I am grateful that there are cis people being allies for us and pushing back against the transphobes (and homophobes and every other kind of -phobe.) But the fact that people have to come up with logical arguments and “evidence” that our transness is “real,” thus keeping the question alive of whether we do, in fact, exist, keeps giving me the creepy feeling that maybe I’m just a figment of my own imagination. I think the technical term is “depersonalization.”

It’s like when people run around “proving” that 1 = 0 — nobody sees any real need to “disprove” it, because it’s obvious that such a proof is BS. (It’s a reductio ad absurdum on the face of it.) But it seems like even those who believe in our existence feel the need to prove it. I was just reading HJ Hornbeck’s post about trans athletes, which has all kinds of “scientific,” “objective” evidence that gender dysphoria, gender identity, etc. are real. The problem with going down that path is not only that it concedes the possibility that it could be “disproven,” but also that trans people who don’t fit into the definitions and criteria in those “proofs” are then implicitly left out of the category “real trans.”

I was originally going to type up something in response, but after re-reading this comment that instinct feels mistaken. I agree with all of it; anything I add would just be restating something they said, and that would promote the idea that trans people’s opinions only carry weight if cis people agree with them. So I’ll give Allison the final word.

This is BTW why I don’t like the idea of medical tests for transness, or proofs that trans people’s brains are observably different from cis people’s. Ultimately, being trans lies in one’s own understanding of oneself, gained through hard and painful experience. If I know based on my own experience of myself that “trans” best describes me, and some brain scan “proves” that I’m not, which am I to believe? (“Who are you gonna believe? Me? Or your own eyes?”) I spent most of my life ignoring my experience of myself and trying to live the way society told me I should, and it damned near killed me, and I think most trans people (at least we older trans people) have had the same experience.

A Year-End Wrap Up

… You know, I’ve never actually done one? They feel a bit self-indulgent, but having looked at the data I think there’s an interesting pattern here. Tell me if you can spot it, based on the eleven posts that earned the most traffic in 2018:

[Read more…]

Checking In

Goodness, it’s been longer than I thought.

One reason I’ve been silent is that I spent much of the last month grinding away on a paper. It’s basically a useful set of computational tools for a specific job, a minor improvement over existing techniques. Nothing too fancy, but as is typical for me it wound up snowballing into a LOT of work right before deadline. My advisor wanted more results, alas, so we blew past that deadline and are aiming for another venue in February.

Normally I would have popped back up after that, but as you may have noticed the Christmas season was upon us. Historically, it has been the hardest time of year for me. Worse, my life has taken a nose-dive over the last two months, and that plus some changes to my emotional support network could have combined to absolutely crush me.

It didn’t, which is still surprising even in hindsight. After a lifetime of battling depression, I’ve apparently gotten to the point where my subconscious can organise self-care without my consciousness cluing in. That was a head-trip.

I haven’t fully dodged the emotional bullet, alas, but at least I’m functional enough to either hammer away at the worst I’m dealing with or sit patiently while it passes. It does mean I’ll be keeping to lighter topics and sparse posts on the blog, though, as I’m not centred enough to pull off longform rants unless I get really ticked off. Sorry to disappoint in that department, but hopefully this isn’t a permanent phase.