<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Alek on Unnamed Website</title><link>https://unnamed.website/tags/alek/</link><description>Recent content in Alek on Unnamed Website</description><generator>Hugo</generator><language>en-us</language><managingEditor>Anthony Wang</managingEditor><webMaster>Anthony Wang</webMaster><lastBuildDate>Wed, 12 Feb 2025 22:03:44 -0500</lastBuildDate><atom:link href="https://unnamed.website/tags/alek/index.xml" rel="self" type="application/rss+xml"/><item><title>ASI Self-Hacking Argument</title><link>https://unnamed.website/posts/asi-self-hacking/</link><pubDate>Wed, 12 Feb 2025 22:03:44 -0500</pubDate><author>Anthony Wang</author><guid>https://unnamed.website/posts/asi-self-hacking/</guid><description>&lt;p&gt;Recently my friend Alek has been thinking a lot about ASI and existential risk, and even though I don&amp;rsquo;t believe &lt;a href="https://awestover.github.io/thoughts/AI-xrisk"&gt;his central claim&lt;/a&gt; at all, it&amp;rsquo;s still been interesting to discuss this stuff. Here&amp;rsquo;s a weird argument I came up with for why one specific theoretical form of ASI might not be harmful at all and might actually be quite useless. I&amp;rsquo;m probably not the first person to come up with this, so I&amp;rsquo;d like to know if there are any articles out there with similar arguments. Anyways, thanks to Alek for helping me fix some flaws and refine this argument!&lt;/p&gt;</description></item><item><title>The End</title><link>https://unnamed.website/posts/end/</link><pubDate>Mon, 03 Jun 2024 19:33:48 -0500</pubDate><author>Anthony Wang</author><guid>https://unnamed.website/posts/end/</guid><description>
&lt;p&gt;No, this post isn&amp;rsquo;t the end of my blog. But it is the end of a long saga that began 123 days ago&amp;hellip;&lt;/p&gt;
&lt;p&gt;A while back, my friend Alek posted a &lt;a href="https://awestover.github.io/skyspace/posts/misc/02-25-24.html"&gt;story on his blog&lt;/a&gt;. A week later, I wanted to write a parody of it in the exact same style. And so I came up with the idea of using only words from Alek&amp;rsquo;s original story, where each word in my story appears no more times than it appears in Alek&amp;rsquo;s story. And thus, the short story series was born! Some of my friends helped continue the chain, where story $i$ only uses words from story $i-1$, counting multiplicity. We also considered making a chain where story $i$ only uses the words, counting multiplicity, that were in story $i-2$ but not in story $i-1$: basically, the words left over after writing story $i-1$. It&amp;rsquo;s kind of like the Euclidean algorithm but more cursed. I guess this would converge on a small set of words that can&amp;rsquo;t form a sentence, like a bunch of copies of &amp;ldquo;the&amp;rdquo;.&lt;/p&gt;
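&lt;p&gt;Both rules are just multiset operations on word counts, so here&amp;rsquo;s a rough Python sketch (the helper names and the naive whitespace tokenizer are made up for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from collections import Counter

def words(story):
    # Naive tokenizer: lowercase and split on whitespace.
    # A real checker would also strip punctuation.
    return Counter(story.lower().split())

def is_valid_next(prev, nxt):
    # Story i is valid if no word appears more times than in story i-1.
    # Counter subtraction drops non-positive counts, so the difference
    # is empty exactly when nxt is a sub-multiset of prev.
    return not (words(nxt) - words(prev))

def leftover_vocab(prev2, prev1):
    # Vocabulary for the cursed variant: the words of story i-2
    # minus the words story i-1 used, counting multiplicity.
    return words(prev2) - words(prev1)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The cursed rule is basically the subtraction form of the Euclidean algorithm, $(a, b) \mapsto (b, a - b)$, applied to word multisets, which is why the chain has to run out of usable words eventually.&lt;/p&gt;&lt;/description&gt;&lt;/item&gt;&lt;/channel&gt;&lt;/rss&gt;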