this is a very simple 2-hidden-layer neural network i built and (probably over-)trained with numpy and elbow grease on the standard mnist handwritten digit dataset
(previously available here, but it seems to now be missing). after a few hundred training iterations, the model performed at
roughly 90% accuracy on both the training and testing datasets (the 'get a new testing image' button pulls from the latter).
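for reference, the forward pass looks roughly like the sketch below - the hidden layer sizes and the relu/softmax activations are assumptions for illustration, not necessarily what the trained model actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_in=784, n_h1=128, n_h2=64, n_out=10):
    # small random weights, zero biases (layer sizes here are assumed, not the real ones)
    return {
        "W1": rng.normal(0, 0.01, (n_in, n_h1)), "b1": np.zeros(n_h1),
        "W2": rng.normal(0, 0.01, (n_h1, n_h2)), "b2": np.zeros(n_h2),
        "W3": rng.normal(0, 0.01, (n_h2, n_out)), "b3": np.zeros(n_out),
    }

def forward(x, p):
    # x: (batch, 784) flattened 28x28 images scaled to [0, 1]
    h1 = np.maximum(0, x @ p["W1"] + p["b1"])       # hidden layer 1 (relu)
    h2 = np.maximum(0, h1 @ p["W2"] + p["b2"])      # hidden layer 2 (relu)
    logits = h2 @ p["W3"] + p["b3"]
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)     # softmax over the 10 digit classes

def predict(x, p):
    return forward(x, p).argmax(axis=1)
```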
to be honest, it performs a good bit worse than i expected - it really struggles with classifying drawn input even though it does quite well on the testing dataset. i suspect
a large part of that comes down to stroke width and how i rasterize the drawing. currently, i take the proportion of filled
pixels under each grid cell and use that as the cell's brightness, which works much better, and adjusting how the brightness
is scaled has seemed to help a bit, but it still struggles (especially with 4s vs 9s and 2s vs 7s), as shown in the sketch below.
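roughly, that rasterization step looks like this - the grid size and the assumption that the canvas divides evenly into cells are simplifications, not the actual code:

```python
import numpy as np

def rasterize(canvas, grid=28):
    # canvas: 2d boolean array of the drawing surface (True = filled pixel);
    # assumes the canvas dimensions are exact multiples of the grid size
    h, w = canvas.shape
    ch, cw = h // grid, w // grid
    cells = canvas[:ch * grid, :cw * grid].reshape(grid, ch, grid, cw)
    # each output cell's brightness = proportion of filled pixels underneath it
    return cells.mean(axis=(1, 3)).astype(np.float32)
```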
i expect that trying to roughly center the drawings before rasterizing would give a big boost.
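a common way to do that (and roughly how the original mnist digits were prepared) is to shift the image so its center of mass sits in the middle. a minimal sketch, assuming a 28x28 float image like the one produced above:

```python
import numpy as np

def center_by_mass(img):
    # img: 28x28 float array, brighter = more ink
    total = img.sum()
    if total == 0:
        return img  # blank drawing, nothing to center
    ys, xs = np.indices(img.shape)
    cy = (ys * img).sum() / total   # center of mass (row, col)
    cx = (xs * img).sum() / total
    dy = int(round((img.shape[0] - 1) / 2 - cy))
    dx = int(round((img.shape[1] - 1) / 2 - cx))
    # integer shift with zero fill (np.roll would wrap ink around the edges)
    out = np.zeros_like(img)
    src = img[max(0, -dy):img.shape[0] - max(0, dy),
              max(0, -dx):img.shape[1] - max(0, dx)]
    out[max(0, dy):max(0, dy) + src.shape[0],
        max(0, dx):max(0, dx) + src.shape[1]] = src
    return out
```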