Started restoring some old images & posts; changed themes to notepadium

This commit is contained in:
Chris Hodapp 2020-02-02 14:02:10 -05:00
parent 94f678534a
commit 7f98cee1da
84 changed files with 126 additions and 71 deletions

.gitmodules

@ -4,3 +4,6 @@
[submodule "hugo_blag/themes/nofancy"] [submodule "hugo_blag/themes/nofancy"]
path = hugo_blag/themes/nofancy path = hugo_blag/themes/nofancy
url = https://github.com/gizak/nofancy.git url = https://github.com/gizak/nofancy.git
[submodule "hugo_blag/themes/hugo-notepadium"]
path = hugo_blag/themes/hugo-notepadium
url = https://github.com/cntrump/hugo-notepadium.git


@ -5,11 +5,22 @@ title = "My New Hugo Site"
#theme = "indigo" #theme = "indigo"
#theme = "zen" #theme = "zen"
# This one *does* use 'highlight' below: # This one *does* use 'highlight' below:
theme = "nofancy" #theme = "nofancy"
theme = "hugo-notepadium"
#PygmentsCodeFences = true
#PygmentsStyle = "monokai"
[params.math]
enable = true # optional: true, false. Enable globally, default:
# false. You can always enable math on per page.
# (how?)
use = "katex" # option: "katex", "mathjax". default: "katex"
[params] [params]
# See themes/nofancy/static/highlight/styles for available options # See themes/nofancy/static/highlight/styles for available options
highlight="tomorrow" #highlight="tomorrow"
# Controls what items are listed in the top nav menu # Controls what items are listed in the top nav menu
# "none", or "categories" # "none", or "categories"
# If you have too many categories to fit in the top nav menu, set this to "none" # If you have too many categories to fit in the top nav menu, set this to "none"
@ -29,3 +40,23 @@ theme = "nofancy"
# noClasses = true
# style = "monokai"
# tabWidth = 4
+[params.nav]
+showCategories = true # /categories/
+showTags = true # /tags/
+[[params.nav.custom]]
+title = "Posts"
+url = "/posts"
+[[params.nav.custom]]
+title = "About"
+url = "/about"
+[[params.nav.custom]]
+title = "Old Crap"
+url = "/old_crap"
+[[params.nav.custom]]
+title = "Hugo"
+url = "https://gohugo.io/"

Image file changed: 409 KiB before, 409 KiB after


@ -16,7 +16,10 @@ implementation in a slow language for heavy computations
(i.e. Python), but it worked well enough to create some good results
like this:
+<!-- TODO: Originally:
[![Don't ask for the source code to this](../images/dla2c.png){width=50%}](../images/dla2c.png)\
+-->
+![Diffusion Limited Aggregation](./dla2c.png "Don't ask for the source code to this")
After about 3 or 4 failed attempts to optimize this program to not
take days to generate images, I finally rewrote it reasonably

Image file changed: 751 KiB before, 751 KiB after

Image file changed: 124 KiB before, 124 KiB after


@ -54,9 +54,15 @@ the 2nd image down below) or
[OpenFrameworks](http://www.openframeworks.cc/) or one of the
too-many-completely-different-versions of Acidity I wrote.
+<!-- TODO: Originals (get alt-text in?)
[![What I learned Bezier splines on, and didn&#39;t learn enough about texturing.](../images/hive13-bezier03.png){width=100%}](../images/hive13-bezier03.png)
[![This was made directly from some equations. I don't know how I'd do this in Blender.](../images/20110118-sketch_mj2011016e.jpg){width=100%}](../images/20110118-sketch_mj2011016e.jpg)
+-->
+![Hive13 bezier splines](./hive13-bezier03.png "What I learned Bezier splines on, and didn't learn enough about texturing.")
+![Processing sketch](./20110118-sketch_mj2011016e.jpg "This was made directly from some equations. I don't know how I'd do this in Blender.")
[POV-Ray](http://www.povray.org) was the last program that I
3D-rendered extensively in (this was mostly 2004-2005, as my
@ -92,6 +98,6 @@ all the precision that I would have had in POV-Ray, but I built them
in probably 1/10 the time. That's the case for the two
work-in-progress Blender images here:
-[![This needs a name and a better background.](../images/20110131-mj20110114b.jpg){width=100%}](../images/20110131-mj20110114b.jpg)
-[![This needs a name and a better background.](../images/20110205-mj20110202-starburst2.jpg){width=100%}](../images/20110205-mj20110202-starburst2.jpg)
+![20110131-mj20110114b](./20110131-mj20110114b.jpg "This needs a name and a better background")
+![20110205-mj20110202-starburst2](./20110205-mj20110202-starburst2.jpg "This needs a name and a better background.")

Image file changed: 154 KiB before, 154 KiB after

Image file changed: 291 KiB before, 291 KiB after

Image file changed: 188 KiB before, 188 KiB after

Image file changed: 220 KiB before, 220 KiB after

Image file changed: 235 KiB before, 235 KiB after

Image file changed: 274 KiB before, 274 KiB after

Image file changed: 282 KiB before, 282 KiB after

Image file changed: 282 KiB before, 282 KiB after

Image file changed: 284 KiB before, 284 KiB after

Image file changed: 286 KiB before, 286 KiB after


@ -48,9 +48,9 @@ expressing an identical image.
One command I've used pretty frequently before posting a large photo online is:
-```bash
+{{<highlight bash>}}
jpegtran -optimize -progressive -copy all input.jpg > output.jpg
-```
+{{< / highlight >}}
This losslessly converts *input.jpg* to a progressive version and
optimizes it as well. (*jpegtran* can do some other things losslessly
@ -108,7 +108,7 @@ Your libjpeg distribution probably contains something called
**wizard.txt** someplace (say, `/usr/share/docs/libjpeg8a` or
`/usr/share/doc/libjpeg-progs`); I don't know if an online copy is
readily available, but mine is
-[here](<../images/obscure_jpeg_features/libjpeg-wizard.txt>). I'll
+[here](<./libjpeg-wizard.txt>). I'll
leave detailed explanation of a scan script to the "Multiple Scan /
Progression Control" section of this document, but note that:
@ -124,7 +124,7 @@ Progression Control" section of this document, but note that:
According to that document, the standard script for a progressive JPEG is this:
-```bash
+{{<highlight text>}}
# Initial DC scan for Y,Cb,Cr (lowest bit not sent)
0,1,2: 0-0, 0, 1 ;
# First AC scan: send first 5 Y AC coefficients, minus 2 lowest bits:
@ -146,31 +146,36 @@ According to that document, the standard script for a progressive JPEG is this:
1: 1-63, 1, 0 ;
# Y AC lowest bit scan is last; it's usually the largest scan
0: 1-63, 1, 0 ;</pre>
-```
+{{< / highlight >}}
And for standard, sequential JPEG it is:
-```bash
+{{<highlight text>}}
0 1 2: 0 63 0 0;
-```
+{{< / highlight >}}
In
-[this image](../images/obscure_jpeg_features/20100713-0107-interleave.jpg)
+[this image](./20100713-0107-interleave.jpg)
I used a custom scan script that sent all of the Y data, then all Cb,
then all Cr. Its custom scan script was just this:
-```bash
+{{<highlight text>}}
0;
1;
2;
-```
+{{< / highlight >}}
While not every browser may do this right, most browsers will render
the greyscale as its comes in, then add color to it one plane at a
time. It'll be more obvious over a slower connection; I purposely left
the image fairly large so that the transfer would be slower. You'll
note as well that the greyscale arrives much more slowly than the
-color.
+color. (2020 note: most browsers will now let you use their
+development tools to simulate a slow connection if you really want to
+see.)
Code & Utilities
================
@ -179,16 +184,18 @@ The **cjpeg** tool from libjpeg will (among other things) create a
JPEG using a custom scan script. Combined with ImageMagick, I used a
command like:
-```bash
+{{<highlight bash>}}
convert input.png ppm:- | cjpeg -quality 95 -optimize -scans scan_script > output.jpg
-```
+{{< / highlight >}}
Or if the input is already a JPEG, `jpegtran` will do the same
thing, losslessly (as it's merely reordering coefficients):
-```bash
+{{<highlight bash>}}
jpegtran -scans scan_script input.jpg > output.jpg
-```
+{{< / highlight >}}
libjpeg has some interesting features as well. Rather than decoding an
entire full-resolution JPEG and then scaling it down, for instance (a
@ -197,7 +204,7 @@ decoding so that it will simply do the reduction for you while
decoding. This takes less time and uses less memory compared with
getting the full decompressed version and resampling afterward.
-The C code below (or [here](../images/obscure_jpeg_features) or this
+The C code below (or [here](./jpeg_split.c) or this
[gist](https://gist.github.com/9220146)), based loosely on `example.c`
from libjpeg, will split up a multi-scan JPEG into a series of
numbered PPM files, each one containing a scan. Look for
@ -207,7 +214,7 @@ processes as much input JPEG as it needs for the next scan. (It needs
nothing special to build besides a functioning libjpeg. `gcc -ljpeg -o
jpeg_split.o jpeg_split.c` works for me.)
-```c
+{{<highlight c>}}
// jpeg_split.c: Write each scan from a multi-scan/progressive JPEG.
// This is based loosely on example.c from libjpeg, and should require only
// libjpeg as a dependency (e.g. gcc -ljpeg -o jpeg_split.o jpeg_split.c).
@ -342,20 +349,20 @@ void read_scan(struct jpeg_decompress_struct * cinfo,
jpeg_finish_output(cinfo);
fclose(outfile);
}
-```
+{{< / highlight >}}
Examples
========
Here are all 10 scans from a standard progressive JPEG, separated out with the example code:
-![Scan 1](../images//obscure_jpeg_features/cropphoto1.png)
-![Scan 2](../images//obscure_jpeg_features/cropphoto2.png)
-![Scan 3](../images//obscure_jpeg_features/cropphoto3.png)
-![Scan 4](../images//obscure_jpeg_features/cropphoto4.png)
-![Scan 5](../images//obscure_jpeg_features/cropphoto5.png)
-![Scan 6](../images//obscure_jpeg_features/cropphoto6.png)
-![Scan 7](../images//obscure_jpeg_features/cropphoto7.png)
-![Scan 8](../images//obscure_jpeg_features/cropphoto8.png)
-![Scan 9](../images//obscure_jpeg_features/cropphoto9.png)
-![Scan 10](../images//obscure_jpeg_features/cropphoto10.png)
+![Scan 1](./cropphoto1.png)
+![Scan 2](./cropphoto2.png)
+![Scan 3](./cropphoto3.png)
+![Scan 4](./cropphoto4.png)
+![Scan 5](./cropphoto5.png)
+![Scan 6](./cropphoto6.png)
+![Scan 7](./cropphoto7.png)
+![Scan 8](./cropphoto8.png)
+![Scan 9](./cropphoto9.png)
+![Scan 10](./cropphoto10.png)
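The scaled decoding mentioned in the hunks above (letting libjpeg reduce the image while decoding, rather than decompressing at full resolution and resampling) comes from the `scale_num`/`scale_denom` fields of `jpeg_decompress_struct`, set between `jpeg_read_header()` and `jpeg_start_decompress()`. The following is only a rough sketch of that feature, not part of this commit; the file name `scale_demo.c` and the choice of 1/4 scale are illustrative assumptions.

```c
/* scale_demo.c: hypothetical example (not from this commit) of libjpeg's
 * DCT-scaled decoding: request 1/4-size output instead of decoding the
 * full-resolution image and resampling afterward.
 * Assumed build command: gcc -o scale_demo scale_demo.c -ljpeg */
#include <stdio.h>
#include <jpeglib.h>

int main(void) {
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, stdin);      /* JPEG comes in on stdin */
    jpeg_read_header(&cinfo, TRUE);

    /* Ask for 1/4-scale output; libjpeg rounds to a ratio it supports. */
    cinfo.scale_num = 1;
    cinfo.scale_denom = 4;

    jpeg_start_decompress(&cinfo);
    printf("scaled output: %u x %u, %d components\n",
           cinfo.output_width, cinfo.output_height, cinfo.output_components);

    /* Scanlines come back already scaled; just read and discard them here. */
    JSAMPARRAY row = (*cinfo.mem->alloc_sarray)
        ((j_common_ptr)&cinfo, JPOOL_IMAGE,
         cinfo.output_width * cinfo.output_components, 1);
    while (cinfo.output_scanline < cinfo.output_height)
        jpeg_read_scanlines(&cinfo, row, 1);

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    return 0;
}
```

Run it as, say, `./scale_demo < input.jpg`; the point is only that the size reduction happens inside the decoder, as the post describes.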


@ -852,7 +852,7 @@ C & =M^\top M \\
D &= \left(M^\top U - (M^\top U)^\top\right) /\ \textrm{max}(1, M^\top M) D &= \left(M^\top U - (M^\top U)^\top\right) /\ \textrm{max}(1, M^\top M)
\end{align} \end{align}
$$ $$
</pre> </div>
where $/$ is Hadamard (i.e. elementwise) division, and $\textrm{max}$ is elementwise maximum with 1. Then, the below gives the prediction for how user $u$ will rate movie $j$: where $/$ is Hadamard (i.e. elementwise) division, and $\textrm{max}$ is elementwise maximum with 1. Then, the below gives the prediction for how user $u$ will rate movie $j$:
@ -860,7 +860,7 @@ where $/$ is Hadamard (i.e. elementwise) division, and $\textrm{max}$ is element
$$
P(u)_j = \frac{[M_u \odot (C_j > 0)] \cdot (D_j + U_u) - U_{u,j}}{M_u \cdot (C_j > 0)}
$$
-</pre>
+</div>
$D_j$ and $C_j$ are row $j$ of $D$ and $C$, respectively. $M_u$ and $U_u$ are column $u$ of $M$ and $U$, respectively. $\odot$ is elementwise multiplication.
@ -891,7 +891,7 @@ S_{j,i}(\chi)} u_j - u_i = \frac{1}{card(S_{j,i}(\chi))}\left(\sum_{u
\in S_{j,i}(\chi)} u_j - \sum_{u \in S_{j,i}(\chi)} u_i\right)
\end{split}
$$
-</pre>
+</div>
where:
@ -930,7 +930,7 @@ matrix multiplication:
<div>
$$C=M^\top M$$
-</pre>
+</div>
since $C\_{i,j}=card(S\_{j,i}(\chi))$ is the dot product of row $i$ of $M^T$ - which is column
$i$ of $M$ - and column $j$ of $M$.
@ -940,7 +940,7 @@ We still need the other half:
<div>
$$\sum_{u \in S_{j,i}(\chi)} u_j - \sum_{u \in S_{j,i}(\chi)} u_i$$
-</pre>
+</div>
We can apply a similar trick here. Consider first what $\sum\_{u \in
S\_{j,i}(\chi)} u\_j$ means: It is the sum of only those ratings of
@ -958,7 +958,7 @@ $M\_j$ (consider the definition of $M\_j$) computes this, and so:
<div>
$$\sum_{u \in S_{j,i}(\chi)} u_j = M_i \cdot U_j$$
-</pre>
+</div>
and as with $C$, since we want every pairwise dot product, this summation just
equals element $(i,j)$ of $M^\top U$. The other half of the summation,
@ -967,13 +967,13 @@ the transpose of this matrix:
<div>
$$\sum_{u \in S_{j,i}(\chi)} u_j - \sum_{u \in S_{j,i}(\chi)} u_i = M^\top U - (M^\top U)^\top = M^\top U - U^\top M$$
-</pre>
+</div>
So, finally, we can compute an entire deviation matrix at once like:
<div>
$$D = \left(M^\top U - (M^\top U)^\top\right) /\ M^\top M$$
-</pre>
+</div>
where $/$ is Hadamard (i.e. elementwise) division, and $D\_{j,i} = \textrm{dev}\_{j,i}$.
@ -987,7 +987,7 @@ Finally, the paper gives the formula to predict how user $u$ will rate movie $j$
$$
P(u)_j = \frac{1}{card(R_j)}\sum_{i\in R_j} \left(\textrm{dev}_{j,i}+u_i\right) = \frac{1}{card(R_j)}\sum_{i\in R_j} \left(D_{j,i} + U_{u,j} \right)
$$
-</pre>
+</div>
where $R\_j = \{i | i \in S(u), i \ne j, card(S\_{j,i}(\chi)) > 0\}$, and $S(u)$ is the set of movies that user $u$ has rated. To unpack the paper's somewhat dense notation, the summation is over every movie $i$ that user $u$ rated and that at least one other user rated, except movie $j$.
@ -995,7 +995,7 @@ We can apply the usual trick yet one more time with a little effort. The summati
<div>
$$P(u)_j = \frac{[M_u \odot (C_j > 0)] \cdot (D_j + U_u) - U_{u,j}}{M_u \cdot (C_j > 0)}$$
-</pre>
+</div>
#### 5.2.2.4. Approximation
@ -1003,7 +1003,7 @@ The paper also gives a formula that is a suitable approximation for larger data
<div>
$$p^{S1}(u)_j = \bar{u} + \frac{1}{card(R_j)}\sum_{i\in R_j} \textrm{dev}_{j,i}$$
-</pre>
+</div>
where $\bar{u}$ is user $u$'s average rating. This doesn't change the formula much; we can compute $\bar{u}$ simply as column means of $U$.
@ -1169,7 +1169,7 @@ In that sense, $P$ and $Q$ give us a model in which ratings are an interaction b
<div>
$$\hat{r}_{ui}=q_i^\top p_u$$
-</pre>
+</div>
However, some things aren't really interactions. Some movies are just (per the ratings) overall better or worse. Some users just tend to rate everything higher or lower. We need some sort of bias built into the model to comprehend this.
@ -1177,7 +1177,7 @@ Let's call $b_i$ the bias for movie $i$, $b_u$ the bias for user $u$, and $\mu$
<div>
$$\hat{r}_{ui}=\mu + b_i + b_u + q_i^\top p_u$$
-</pre>
+</div>
This is the basic model we'll implement, and the same one described in the references at the top.
@ -1187,7 +1187,7 @@ More formally, the prediction model is:
<div>
$$\hat{r}_{ui}=\mu + b_i + b_u + q_i^\top p_u$$
-</pre>
+</div>
where:
@ -1215,7 +1215,7 @@ $$
\frac{\partial E}{\partial b_i} &= 2 \sum_{r_{ui}} \left(\lambda b_i + r_{ui} - \hat{r}_{ui}\right)
\end{split}
$$
-</pre>
+</div>
Gradient with respect to $p_u$ proceeds similarly:
@ -1229,7 +1229,7 @@ p_u}q_i^\top p_u \right) + 2 \lambda p_u \\
\frac{\partial E}{\partial p_u} &= 2 \sum_{r_{ui}} \lambda p_u - \left(r_{ui} - \hat{r}_{ui}\right)q_i^\top
\end{split}
$$
-</pre>
+</div>
Gradient with respect to $b\_u$ is identical form to $b\_i$, and gradient with respect to $q\_i$ is identical form to $p\_u$, except that the variables switch places. The full gradients then have the standard form for gradient descent, i.e. a summation of a gradient term for each individual data point, so they turn easily into update rules for each parameter (which match the ones in the Surprise link) after absorbing the leading 2 into learning rate $\gamma$ and separating out the summation over each data point. That's given below, with $e\_{ui}=r\_{ui} - \hat{r}\_{ui}$:
@ -1242,7 +1242,7 @@ $$
\frac{\partial E}{\partial q_i} &= 2 \sum_{r_{ui}} \lambda q_i - e_{ui}p_u^\top\ \ \ &\longrightarrow q_i' &= q_i - \gamma\frac{\partial E}{\partial q_i} &= q_i + \gamma\left(e_{ui}p_u - \lambda q_i \right) \\
\end{split}
$$
-</pre>
+</div>
The code below is a direct implementation of this by simply iteratively applying the above equations for each data point - in other words, stochastic gradient descent.


@ -20,20 +20,21 @@ compilation, namespaces, multiple return values, packages, a mostly
sane build system, no C preprocessor, *minimal* object-oriented
support, interfaces, anonymous functions, and closures. Those aren't
trivialities; they're all rather great things. They're all missing in
-C and C++ (for the most part). They're all such common problems that
-nearly every "practical" C/C++ project uses a lot of ad-hoc solutions
-sitting both inside and outside the language - libraries, abuse of
-macros, more extensive code generation, lots of tooling, and a whole
-lot of "best practices" slavishly followed - to try to solve them.
-(No, I don't want to hear about how this lack of very basic features
-is actually a feature. No, I don't want to hear about how
-painstakingly fucking around with pointers is the hairshirt that we
-all must wear if we wish for our software to achieve a greater state
-of piety than is accessible to high-level languages. No, I don't want
-to hear about how ~$arbitrary_abstraction_level~ is the level that
-*real* programmers work at, any programmer who works above that level
-is a loser, and any programmer who works below that level might as
-well be building toasters. Shut up.)
+C and C++ (for the most part - excluding that C++11 has started
+incorporating some). They're all such common problems that nearly
+every "practical" C/C++ project uses a lot of ad-hoc solutions sitting
+both inside and outside the language - libraries, abuse of macros,
+more extensive code generation, lots of tooling, and a whole lot of
+"best practices" slavishly followed - to try to solve them. (No, I
+don't want to hear about how this lack of very basic features is
+actually a feature. No, I don't want to hear about how painstakingly
+fucking around with pointers is the hairshirt that we all must wear if
+we wish for our software to achieve a greater state of piety than is
+accessible to high-level languages. No, I don't want to hear about
+how ~$arbitrary_abstraction_level~ is the level that *real*
+programmers work at, any programmer who works above that level is a
+loser, and any programmer who works below that level might as well be
+building toasters. Shut up.)
I'm a functional programming nerd. I just happen to also have a lot of
experience being knee-deep in C and C++ code. I'm looking at Go from
@ -53,13 +54,12 @@ less transparently.
Concurrency was made a central aim in this language. If you've not
watched Rob Pike's [[https://blog.golang.org/concurrency-is-not-parallelism][Concurrency is not parallelism]] talk, go do it now.
-While it's perhaps not my favorite approach to concurrency. While I
-may not be a fan of the style of concurrency that it uses (based on
-[[https://en.wikipedia.org/wiki/Communicating_sequential_processes][CSP]] rather than the more Erlang-ian message passing), this is still a
-far superior style to the very popular concurrency paradigm of
-Concurrency Is Easy, We'll Just Ignore It Now and Duct-Tape the
-Support On Later. [[http://jordanorelli.com/post/31533769172/why-i-went-from-python-to-go-and-not-nodejs][Why I went from Python to Go (and not node.js)]], in
-my opinion, is spot-on.
+While I may not be a fan of the style of concurrency that it uses
+(based on [[https://en.wikipedia.org/wiki/Communicating_sequential_processes][CSP]] rather than the more Erlang-ian message passing), this
+is still a far superior style to the very popular concurrency paradigm
+of Concurrency Is Easy, We'll Just Ignore It Now and Duct-Tape the
+Support On Later, How Hard Could It Possibly Be. [[http://jordanorelli.com/post/31533769172/why-i-went-from-python-to-go-and-not-nodejs][Why I went from
+Python to Go (and not node.js)]], in my opinion, is spot-on.
Many packages are available for it, and from all I've seen, they are
sensible packages - not [[https://www.reddit.com/r/programming/comments/4bjss2/an_11_line_npm_package_called_leftpad_with_only/][leftpad]]-style idiocy. I'm sure that if I look
@ -92,7 +92,11 @@ system is also still very limited - particularly, things like the lack
of any parametric polymorphism. I'd probably prefer something more
like in [[https://www.rust-lang.org][Rust]]. I know this was largely intentional as well: Go was
designed for people who don't want a more powerful type system, but do
-want types.
+want types, and further, to support this kind of polymorphism involves
+tradeoffs it looks like they were avoiding, like those Russ Cox gives
+in [[https://research.swtch.com/generic][The Generic Dilemma]]. (Later note: the [[https://github.com/golang/proposal/blob/master/design/go2draft-contracts.md][Contracts - Draft Design]]
+proposal for Go 2 offers a possible approach for parametric
+polymorphism.)
My objections aren't unique. [[https://www.teamten.com/lawrence/writings/why-i-dont-like-go.html][Ten Reasons Why I Don't Like Golang]] and
[[http://yager.io/programming/go.html][Why Go Is Not Good]] have criticisms I can't really disagree with.

@ -0,0 +1 @@
Subproject commit e479cd6fc378e0c236dac90e0a10360c232927a5