<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Insane on Unnamed Website</title><link>https://unnamed.website/tags/insane/</link><description>Recent content in Insane on Unnamed Website</description><generator>Hugo</generator><language>en-us</language><managingEditor>Anthony Wang</managingEditor><webMaster>Anthony Wang</webMaster><lastBuildDate>Sat, 15 Mar 2025 21:30:06 -0400</lastBuildDate><atom:link href="https://unnamed.website/tags/insane/index.xml" rel="self" type="application/rss+xml"/><item><title>Formally Verifying Fenwick Trees</title><link>https://unnamed.website/posts/formally-verifying-fenwick-trees/</link><pubDate>Sat, 15 Mar 2025 21:30:06 -0400</pubDate><author>Anthony Wang</author><guid>https://unnamed.website/posts/formally-verifying-fenwick-trees/</guid><description>&lt;style&gt;
.chardiv {
	float: left;
	width: 100px;
}
.charimg {
	height: 60px;
}
blockquote {
	min-height: 60px;
}
&lt;/style&gt;
&lt;div class="chardiv"&gt;
	&lt;img src="https://unnamed.website/img/char/kublai.png" class="charimg"&gt;
&lt;/div&gt;
&lt;blockquote&gt;&lt;strong&gt;Kublai&lt;/strong&gt;:
Hey, it&amp;rsquo;s you again! That formal verification thing you mentioned last time sucks!&lt;/blockquote&gt;

&lt;p&gt;Huh? You mean our &lt;a href="https://unnamed.website/posts/i-can-prove-it-can-sort/"&gt;proof of the ICan&amp;rsquo;tBelieveItCanSort algorithm&lt;/a&gt;?&lt;/p&gt;
&lt;div class="chardiv"&gt;
	&lt;img src="https://unnamed.website/img/char/kublai.png" class="charimg"&gt;
&lt;/div&gt;
&lt;blockquote&gt;&lt;strong&gt;Kublai&lt;/strong&gt;:
Yeah! ICan&amp;rsquo;tBelieveItCanSort? More like ICanBelieveItObviouslyCanSort! After watching that visualization a few times, it intuitively makes so much sense. It&amp;rsquo;s trivial.&lt;/blockquote&gt;

&lt;p&gt;Trivial? I&amp;rsquo;m banning that word. If you consider something to be trivial, you probably haven&amp;rsquo;t pondered it deeply enough.&lt;/p&gt;</description></item><item><title>Solving Shortest Paths With Transformers</title><link>https://unnamed.website/posts/solving-shortest-paths-with-transformers/</link><pubDate>Wed, 11 Dec 2024 11:29:07 -0500</pubDate><author>Anthony Wang</author><guid>https://unnamed.website/posts/solving-shortest-paths-with-transformers/</guid><description>&lt;p&gt;&lt;link rel="stylesheet" href="https://unnamed.website/katex/katex.min.css" crossorigin="anonymous"&gt;
&lt;script defer src="https://unnamed.website/katex/katex.min.js" crossorigin="anonymous"&gt;&lt;/script&gt;
&lt;script defer src="https://unnamed.website/katex/contrib/auto-render.min.js" crossorigin="anonymous" onload="renderMathInElement(document.body, {delimiters: [{left: '$', right: '$', display: false}, {left: '\\(', right: '\\)', display: false}, {left: '\\[', right: '\\]', display: true}, {left: '\\begin{equation}', right: '\\end{equation}', display: true}, {left: '\\begin{equation*}', right: '\\end{equation*}', display: true}, {left: '\\begin{align}', right: '\\end{align}', display: true}, {left: '\\begin{align*}', right: '\\end{align*}', display: true}]});"&gt;&lt;/script&gt;

&lt;style&gt;
.mpld3-staticpaths:has(:nth-child(10)) {
 transform: rotate(180deg) translate(-520px,-423px);
}
.mpld3-figure {
 display: block;
 margin: auto;
}
&lt;/style&gt;
&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;h3 id="motivation"&gt;Motivation&lt;/h3&gt;
&lt;p&gt;Neural networks are capable of impressive feats of off-distribution generalization. For instance, we discussed in class a program trained to convert sketches of cats into realistic-looking pictures of cats that was able to draw a cat with three eyes given a sketch with three eyes, even though there were no three-eyed cats in the training data. However, neural networks also often learn non-robust features that cause them to perform poorly off-distribution (e.g., adversarial examples for an image classifier). In this project, we investigate when transformers generalize off-distribution via a case study on a simple synthetic task. More specifically, the goal of our project is to make progress toward answering the following question:&lt;/p&gt;</description></item></channel></rss>