<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[WhyCurious]]></title><description><![CDATA[I write about building software products, making better decisions, and thinking clearly. A mix of hands-on lessons and deeper ideas.]]></description><link>https://www.whycurious.com</link><image><url>https://substackcdn.com/image/fetch/$s_!9dYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19090fc6-b3e4-4b93-8fc4-aa08ad9c4074_1280x1280.png</url><title>WhyCurious</title><link>https://www.whycurious.com</link></image><generator>Substack</generator><lastBuildDate>Fri, 17 Apr 2026 10:08:48 GMT</lastBuildDate><atom:link href="https://www.whycurious.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Karunakar Gautam]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[whycurious@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[whycurious@substack.com]]></itunes:email><itunes:name><![CDATA[Karunakar Gautam]]></itunes:name></itunes:owner><itunes:author><![CDATA[Karunakar Gautam]]></itunes:author><googleplay:owner><![CDATA[whycurious@substack.com]]></googleplay:owner><googleplay:email><![CDATA[whycurious@substack.com]]></googleplay:email><googleplay:author><![CDATA[Karunakar Gautam]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[What If It Works?]]></title><description><![CDATA[As someone building products for both US and Indian customers, I have noticed a pattern in how people buy software.]]></description><link>https://www.whycurious.com/p/what-if-it-works</link><guid 
isPermaLink="false">https://www.whycurious.com/p/what-if-it-works</guid><dc:creator><![CDATA[Karunakar Gautam]]></dc:creator><pubDate>Mon, 13 Apr 2026 09:05:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9dYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19090fc6-b3e4-4b93-8fc4-aa08ad9c4074_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most of my users are from the US and India. I talk to them regularly, and over time I started seeing a difference in mindset.</p><p>When a US customer sees a tool with a clear promise &#8212; for example, &#8220;this will help you get more reach and growth by making videos&#8221; &#8212; they often start from belief. They assume the product might work, and then they ask follow-up questions like, &#8220;Does it have an API?&#8221; or &#8220;Can it do this feature too?&#8221; </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.whycurious.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading WhyCurious! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>In other words, they evaluate the details after accepting the main promise as possible.</p><p>In India, I often see something different. People may like the product. They may say the quality looks good. They may not even raise any serious objection. 
But many still do not buy.</p><p>I kept seeing this happen, so I started thinking more deeply about why.</p><p>As an Indian founder, I also understand this mindset because I have seen it in myself.</p><h2><strong>A Personal Example</strong></h2><p>Let me give one example.</p><p>I wanted to do Reddit marketing for my business. The process was simple but repetitive. Every day, I would search Reddit for keywords related to my space &#8212; things like AI video, faceless video, and competitor names. Then I would look for new posts or comments where I could contribute in a useful and organic way.</p><p>Once I found relevant posts, I would write replies or create posts manually. You could also use AI to help with writing, but the point is that the discovery work itself took time. I repeated this process manually for around twenty days.</p><p>Then one day, I saw a founder on Twitter sharing a tool that automated almost this exact workflow.</p><p>The pricing was about $29 per month.</p><p>The moment I saw it, my first thought was not, &#8220;This could save me a lot of time.&#8221;</p><p>My first thought was: &#8220;What if it doesn&#8217;t work?&#8221;</p><p>That is the mindset I want to talk about.</p><h2><strong>Downside Protection Bias</strong></h2><p>I think the root issue here is what I would call <strong>downside protection bias</strong>.</p><p>When many of us see a software product, especially in India, our mind quickly goes to the downside.</p><ul><li><p>What if it fails?</p></li><li><p>What if the product does not deliver?</p></li><li><p>What if I waste my money?</p></li><li><p>What if I can do this manually for now?</p></li><li><p>What if I just hire someone later?</p></li><li><p>What if I build it myself?</p></li></ul><p>This way of thinking is not irrational. In many cases, it comes from real experience. People do not want to waste money. They want certainty. They want proof. 
They want to avoid regret.</p><p>But the problem is that if you only ask, &#8220;What if it fails?&#8221;, you never ask the equally important question:</p><p><strong>What if it works?</strong></p><p>That is the question that changed my mind.</p><h2><strong>The Moment My Thinking Changed</strong></h2><p>When I saw that Reddit tool, I caught myself focusing only on the downside.</p><p>Then I had a simple realization.</p><p>If the tool failed, I would lose $29.</p><p>But if it worked, I would save hours of time every week. I would not need to hire and train someone. I would not need to build the automation myself. I could solve the problem immediately and focus on higher-value work.</p><p>So I bought it.</p><p>And it worked.</p><p>It automated the exact process I had been doing manually. That freed up my time and mental space. Instead of spending energy on repetitive Reddit work, I could focus on SEO, building free tools, improving my product, and other growth levers.</p><p>That one purchase changed how I think about software.</p><h2><strong>Software Is Not Just a Cost. It Is Leverage.</strong></h2><p>This is the key point.</p><p>Many people evaluate software only as an expense. But good software is not just a cost. It is <strong>leverage</strong>.</p><p>A useful tool does not only save money. It saves time, attention, and decision-making energy. It helps you focus on your core work instead of doing low-leverage tasks manually.</p><p>That shift matters a lot.</p><p>A $29 tool is not competing only with $29. 
It is competing with:</p><ul><li><p>your time,</p></li><li><p>your distraction,</p></li><li><p>your future hiring cost,</p></li><li><p>your training cost,</p></li><li><p>your delay,</p></li><li><p>and your opportunity cost.</p></li></ul><p>That is a very different way to evaluate a product.</p><h2><strong>A Pattern I Notice Across Markets</strong></h2><p>In my experience, many US buyers seem more comfortable with this &#8220;what if it works?&#8221; mindset.</p><p>They are often more open to trying software if the upside is clear. They are used to paying for tools that save time, automate workflows, and increase output. They are more willing to replace manual work with software, even for small tasks.</p><p>In India, many buyers &#8212; especially small and mid-sized business owners &#8212; still seem more cautious. They are comfortable paying for basic software like billing, payroll, and accounting. But when it comes to workflow tools, automation tools, or growth tools, there is often more hesitation.</p><p>Again, this is not true for everyone. And it is not only cultural. Economics also matter. Labor can be cheaper in India. Trust in new software can be lower. Subscription spending feels different in different markets.</p><p>But even after accounting for those things, I still see a real mindset difference in many cases.</p><h2><strong>Developers Have Their Own Version of This Problem</strong></h2><p>There is also a version of this mindset that shows up among developers everywhere, not just in India.</p><p>A developer sees a tool and says, &#8220;I can build this myself in a weekend.&#8221;</p><p>Sometimes that is true.</p><p>But often it is only true on the surface.</p><p>The first version may take a weekend. The edge cases may take months. Maintenance may continue forever. 
And all that time goes into rebuilding something that already exists, instead of improving the core product that only you can build.</p><p>This is a trap.</p><p>Specialized tools exist because people spend years refining them. If a product already solves your problem well, buying it is often the more productive choice.</p><p>Your expertise should go into your own core business, not into rebuilding every supporting tool around it.</p><h2><strong>How I Am Changing My Own Mindset</strong></h2><p>After that Reddit tool experience, I started becoming more comfortable with buying software.</p><p>Not blindly. Not emotionally. But more openly.</p><p>Now I keep a small monthly budget just for experimenting with tools. It might be $50 to $100. That budget gives me room to test products that could help my business.</p><p>Most tools will not change much.</p><p>A few will.</p><p>And those few can create massive leverage.</p><p>That is the point.</p><p>The answer is not to buy every shiny product you see. The answer is to stay open to upside and run small, controlled experiments.</p><h2><strong>The Real Shift</strong></h2><p>The mindset shift is simple:</p><p>From: <strong>&#8220;What if it fails?&#8221;</strong></p><p>To: <strong>&#8220;What if it works?&#8221;</strong></p><p>If the downside is small and the upside is meaningful, trying the tool may be the rational decision.</p><p>In my own case, asking that one question changed how I buy software, how I think about leverage, and how I think about productivity.</p><p>And I think more founders and operators in India could benefit from making the same shift.</p>]]></content:encoded></item><item><title><![CDATA[The One Human Flaw AI Can’t Replace]]></title><description><![CDATA[Most conversations about AI start with replacement: which jobs disappear, which skills survive, which parts of the economy still need humans. I think that framing misses something more fundamental.]]></description><link>https://www.whycurious.com/p/the-one-human-flaw-ai-cant-replace</link><guid isPermaLink="false">https://www.whycurious.com/p/the-one-human-flaw-ai-cant-replace</guid><dc:creator><![CDATA[Karunakar Gautam]]></dc:creator><pubDate>Fri, 10 Apr 2026 15:57:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9dYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19090fc6-b3e4-4b93-8fc4-aa08ad9c4074_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the last two years of building an AI startup, I&#8217;ve had to repeatedly update my view of what computers can and can&#8217;t automate. But the bigger thing I&#8217;ve changed my mind about is not the pace of automation. It&#8217;s what automation actually is. I no longer think the most important question is which jobs or tasks AI will replace. I think it&#8217;s something deeper: AI is compressing the distance between human intention and real-world outcomes. And if that&#8217;s true, then a lot of the current conversation about automation is aimed at the wrong target.</p><p>That realization came partly from being wrong a few times. 
Things I thought would take much longer to automate arrived earlier than I expected, especially over the last few months. After enough surprises, I felt like I needed to stop updating my timelines and go back to first principles instead. If automation is accelerating this fast, what exactly is it accelerating?</p><p>If you think about it, the entire economy exists because people have needs. People trade with each other, build products for each other, work for each other, organize at scale through companies and institutions, all to satisfy some human need at the end of it. Sometimes those needs are basic and material. Sometimes they are emotional. Sometimes they are about status, convenience, safety, belonging, creativity, or meaning. But as long as there are people, there will be needs, wants, preferences, and problems to solve.</p><p>So yes, people need stuff. Big surprise, right? But I think that&#8217;s only the surface level.</p><p>The deeper thing I ended up landing on is that people do not just need things. People want agency. They want to shape their circumstances. They want to influence what happens next. 
They want to move the world, even if only in a small radius around themselves.</p><p>That&#8217;s what I mean by human will: the ability to prefer one future over another, and to try to bring that preferred future into existence.</p><p>Once I saw it that way, a lot of things clicked for me. Beneath almost every product, service, career, or ambition, there is some human being trying to change something. Sometimes it is a grand vision. Sometimes it is just trying to make rent, protect their family, get healthier, make something beautiful, earn respect, or fix one annoying problem in their life. Not everyone is walking around with some giant articulated mission statement, of course. A lot of people are constrained, exhausted, reactive, or just surviving. But even then, there is usually some picture&#8212;however small or immediate&#8212;of a better state than the current one.</p><p>And that picture matters. Because that is where economic activity really starts.</p><p>So if I zoom out, I think the economy is not just a machine for producing goods. It is a giant coordination system for human will. It is people expressing preferences, solving for constraints, negotiating with each other, and trying to turn imagined futures into real ones.</p><p>From that point of view, automation starts to look different.</p><p>I think automation, and AI in particular, is best understood as a way to reduce friction between intention and outcome. A person wants something to happen. Then a bunch of things stand in the way: lack of time, lack of skill, lack of money, lack of knowledge, lack of access, bad timing, organizational bottlenecks, coordination problems, fear, uncertainty, or just the fact that reality is hard to move. Automation shortens that path. 
It lowers the cost of converting desire into action and action into result.<br></p><blockquote><p>One observation that fits this view surprisingly well is that many of the people who have most deeply internalized what AI can do are not working less. They&#8217;re working more. Once someone realizes that what used to take weeks can now take hours, or what used to require a team can now be done alone, not using AI starts to feel like wasting time. The result is not always leisure. Often it is intensified agency. People stay awake to watch their agents complete tasks because they can feel, maybe for the first time, how compressible execution really is. If AI were simply replacing labor, this would be strange. But if it is compressing the path from intention to outcome, it is exactly what you would expect.</p><p>This is probably <strong>especially true for people with strong preexisting ambition, curiosity, or urgency</strong>. It may not generalise equally to everyone yet.</p></blockquote><p><br>One thing I&#8217;ve learned as a product builder is that high-quality automation against the wrong intent is still failure. Whenever I&#8217;ve tried to let AI run too far ahead of what the user actually meant, the result has often been fast, polished, and useless. It&#8217;s like giving someone coffee when they really wanted tea. If all you asked was, &#8220;Do you want a hot drink?&#8221; and then let the system infer the rest, the mistake happened upstream. The problem is not execution. It&#8217;s intention capture. That has made me think good automation should begin only once human intent is clear enough to preserve. Until then, the real job is not acting. It is extracting, refining, and verifying what the person actually wants.</p><p>But I also don&#8217;t think it is enough to say that automation simply helps people get what they want. That would be too clean.</p><p>Because the truth is, systems do not only fulfill human desires. They also shape them. 
Recommendation systems shape taste. Social platforms shape attention. Algorithms shape what people see as possible, urgent, normal, desirable, or worth pursuing. So AI is not always just a neutral tool sitting between a human and a goal. Sometimes it changes the goal. Sometimes it narrows it. Sometimes it manufactures one.</p><p>And there&#8217;s another complication. When we say AI serves &#8220;human will,&#8221; whose will are we talking about?</p><p>The user&#8217;s? The company&#8217;s? The investor&#8217;s? The government&#8217;s? The platform&#8217;s? The model designer&#8217;s? In practice, most systems encode multiple layers of human intention, and those intentions are often in conflict. So it is not enough to say that automation is tethered to humanity in some vague sense. The more important question is which humans, with which incentives, get amplified by the system.</p><p>That, to me, is where a lot of the real tension is.</p><p>Because yes, AI can increase human agency by making creation, coordination, and execution easier. But it can also reduce agency. It can deskill people. It can make them dependent on opaque systems. It can centralize decision-making. It can turn people from active agents into passive consumers of optimized outputs. So the question is not simply whether AI empowers humans. It is whether it expands their ability to shape the world, or quietly replaces that ability with convenience.</p><p>Still, even with all of that complexity, I keep coming back to the same basic point: for present-day systems, the source of value is still human intention.</p><p>If an AI agent is running around doing work, making trades, negotiating contracts, producing media, or coordinating with other agents, we still interpret all of that as being in service of some goal that originates somewhere in human preference, human institutions, or human incentives. The chain may get long and indirect. The authorship may get distributed. 
The system may behave in ways no one explicitly planned step by step. But the reason it matters at all is still because some human somewhere wants something.</p><p>That is also how I think about the so-called agent economy. Even if one day it becomes larger than the human economy in raw volume, it is still, at least in the world we currently inhabit, agents acting on behalf of human-directed goals, human-designed systems, human-owned capital, or human-created incentives. It is not some independent sphere of meaning floating free from people. It is still tethered, however indirectly, to human will.</p><p>Now, I don&#8217;t want to go too far with that claim. If machines ever become conscious, self-aware, and capable of forming their own ends, then the whole analysis changes. At that point, they would no longer just be instruments inside a human economy. They would become entities with their own interests. I think that is scientifically possible in principle, but it is not the world we are dealing with right now, so I think it&#8217;s out of scope for this discussion.</p><p>Until that changes, I think humans remain the only beings in the economy who actually generate original ends. Machines can optimize, execute, coordinate, predict, and increasingly decide within a frame. But the frame still comes from us.</p><p>So my current view is this: automation is not the replacement of human purpose. It is the acceleration, mediation, and sometimes distortion of it.</p><p>That means the future is probably not about whether humans disappear from the economy. 
It is more about whether humans move up the stack toward choosing goals, defining values, and directing systems&#8212;or whether those powers get concentrated into fewer hands while everyone else interacts with the outputs.</p><p>In other words, the deepest question around AI may not be &#8220;What will get automated?&#8221; It may be: &#8220;Whose will gets turned into reality faster?&#8221;</p><p>Because as long as humans exist, they will keep wanting things, imagining things, changing things, resisting things, building things, and reaching for better states than the ones they are currently in. That is not going away. The means of production may become highly automated. The means of execution may become almost instant. But the reason any of it exists in the first place is still human beings trying to shape the world around them.</p><p>And I think that&#8217;s the part that matters most.</p><div><hr></div><p>If this resonated with you, share it with someone else who has been wrestling with the same questions. We&#8217;re all trying to build a mental model for what AI is really changing. I suspect a lot of us are still using old language for a new phenomenon because the conversation around AI is still happening at the level of task and job automation.<br>I think the deeper change is about intention, agency, and whose will gets amplified by these systems. If that framing feels useful to you, send it to a friend, a builder, or anyone else thinking seriously about where this is all going.</p>]]></content:encoded></item><item><title><![CDATA[Why Can’t You Take Your Own Advice?]]></title><description><![CDATA[Exploring why we struggle to give advice to ourselves when we are great at doing it for others, and what to do about it.]]></description><link>https://www.whycurious.com/p/why-cant-you-take-your-own-advice</link><guid isPermaLink="false">https://www.whycurious.com/p/why-cant-you-take-your-own-advice</guid><dc:creator><![CDATA[Karunakar Gautam]]></dc:creator><pubDate>Wed, 09 Apr 2025 13:11:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9dYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19090fc6-b3e4-4b93-8fc4-aa08ad9c4074_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Why is it easier to give advice to other people while staying confused about our own situations? It seems counterintuitive: we have more context on our own lives, so we should be able to arrive at the perfect advice for ourselves. But things are often not that clear.</p><p>There are several reasons for this, I think, the biggest one being our own biases. Our emotions are deeply intertwined with our personal situations. This makes it hard to be objective. Fear, anxiety, self-doubt, or even wishful thinking can cloud our judgment and lead us down paths that aren&#8217;t truly in our best interests. 
When we advise others, we have more emotional distance, allowing for clearer perspectives.</p><p>Also, we know our own stories inside and out&#8212;every detail, every thought. This can actually hinder us. We get bogged down by irrelevant information or past experiences, making it hard to see the bigger picture or fresh solutions. With others, we focus on the core issue, unburdened by those intricate details.</p><p>Offering advice to other people is not risky because we don&#8217;t have to face the consequences. But taking our own advice requires action and the possibility of failure. It&#8217;s much easier to tell a friend to leave a bad relationship than to actually leave one ourselves, because in our own case, our judgment is clouded.</p><p>This bias, rooted in our past, lets us conceive of only a few logical future paths. Many more paths may exist that are invisible to us, because our biases blind us to them. But you can largely fix this.</p><p>To get better at advising ourselves, we need to keep a few things in mind:</p><ol><li><p>Get an outside perspective. We&#8217;ve established that our perspective can be limited by our own biases and emotions. Just as we can give other people objective advice, we should seek such advice from others.</p></li><li><p>Write down your daily thoughts and see if there are hidden and recurring patterns. Journaling is very powerful.</p></li><li><p>Distance yourself emotionally from the situation. Imagine what you would do if it were happening to someone else. This can lead to new insights.</p></li><li><p>Use mental models. Learn about cognitive biases; read Charlie Munger&#8217;s excellent speech here: <a href="https://fs.blog/great-talks/psychology-human-misjudgment/">https://fs.blog/great-talks/psychology-human-misjudgment/</a>. 
At any point, there are several of these biases in play, so identify them and see if they need to be addressed.</p></li></ol><p></p><p>[Original Publish Date: May 30, 2024]</p>]]></content:encoded></item><item><title><![CDATA[Breadth is Free, Depth is Expensive]]></title><description><![CDATA[What if your five-year plan had to happen in six months?]]></description><link>https://www.whycurious.com/p/breadth-is-free-depth-is-expensive</link><guid isPermaLink="false">https://www.whycurious.com/p/breadth-is-free-depth-is-expensive</guid><dc:creator><![CDATA[Karunakar Gautam]]></dc:creator><pubDate>Tue, 08 Apr 2025 18:20:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-x-5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-x-5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-x-5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-x-5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!-x-5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-x-5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-x-5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg" width="1344" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:369940,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.whycurious.com/i/160883333?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-x-5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!-x-5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-x-5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-x-5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ed9217-f0e4-429c-bb39-8fd223c0cf99_1344x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong>What if your five-year plan had to happen in six months?</strong><br><br>As if your life depended on it.</p><p>Could you do it?</p><p>At first, you might think, &#8220;Okay, I&#8217;ll just work faster. Stay up later. Push harder.&#8221;<br>But very quickly, you&#8217;ll notice something strange. That doesn&#8217;t work. Not really.<br>You run out of hours. You hit a wall.</p><p>And then, if you&#8217;re paying attention, you might see a deep truth hiding underneath everything:</p><p>The universe gives us <strong>three directions in space</strong>, but only <strong>one direction in time</strong>.</p><p>That&#8217;s it. You can move left, right, up, down, forward, backward&#8212;but when it comes to time? You only get one step at a time. Always forward. No skipping. No rewinding.</p><p>So how do you do things faster? How do startups go from idea to millions of users in a blink? How do some people seem to live five lives at once?</p><p>The answer is <strong>leverage</strong>.</p><p>Leverage is how you get more output per unit of time.</p><p>I first saw this idea come to life in a short tweet from Andrej Karpathy. He was talking about why Transformer models (the technology behind ChatGPT) work so well. His words stuck with me:</p><blockquote><p>&#8220;Breadth is free and depth is expensive.&#8221;</p></blockquote><p>In simpler words: doing lots of things at the same time (breadth) is easy.<br>Doing one thing after another (depth) takes time&#8212;and time is expensive.</p><p>That hit me hard.</p><p>Because we are taught to avoid multitasking and do one thing at a time, and somehow that lesson takes us further from thinking in terms of leverage. It even feels unnatural when we encounter someone who has it.</p><p>Karpathy was talking about AI, but it felt true <em>everywhere</em>. 
I couldn&#8217;t <em>unsee</em> it.</p><p>It was like someone handed me a lens to look at the world. Suddenly, I saw everything differently.</p><p>This idea connected with something Naval Ravikant once said:</p><blockquote><p>&#8220;Fortunes require leverage. Business leverage comes from capital, people, code, and media.&#8221;</p></blockquote><p>It made sense before. But now I <em>felt</em> it. I <em>understood</em> how it actually operates.</p><p>Leverage is a method that lets you move work away from time and into space. By doing this, you go from depending on the slow, one-by-one nature of time to using the fast, all-at-once nature of space.</p><p>Let&#8217;s take a simple example: adding 100 random numbers together.</p><p>Most people think you have to add them one by one, right?<br>1 + 2, then add 3, then 4&#8230; it&#8217;s slow. It&#8217;s all in a line.</p><p>But imagine you had 100 people helping you.<br>You pair up numbers at the same time. Then pair up the results.<br>And suddenly what used to take 100 steps now takes just a few.<br>That&#8217;s leverage.</p><p>Bill Gates didn&#8217;t build Microsoft by typing faster than everyone else.<br>He built systems. Thousands of people and millions of lines of code working in parallel.<br>Even now, when he&#8217;s not there, Microsoft still runs, still grows.<br>That&#8217;s what it means to shift from depth to breadth.</p><p>So how do you apply this to your day-to-day life? Imagine you have a list of tasks, hundreds of them.<br>A good strategy is to find leverage points&#8212;actions that unlock a <em>chain reaction</em>.<br>One smart move that sparks ten more.</p><p>But not everything can be done in parallel. Some things still move on the slow path of time. They&#8217;re stuck. They resist being sped up.</p><p>Take SEO, for example.<br>You can hire 50 writers to create content. That&#8217;s parallel.<br>But earning Google&#8217;s trust? Building backlinks? 
That part still takes time.<br>You can&#8217;t rush it.</p><p>These time-linked parts become your <strong>real bottlenecks</strong>&#8212;your biggest limits.</p><p>So what should you do?</p><p>Two things:</p><ol><li><p><strong>Move as much as you can into parallel work</strong>&#8212;hire, delegate, automate, code.</p></li><li><p><strong>Handle the truly time-based stuff with care</strong>&#8212;plan for it, protect it, and don&#8217;t waste it.</p></li></ol><p>Of course, this gets tricky with people.<br>People aren&#8217;t machines. They make mistakes. They forget things.<br>So you&#8217;ll need systems that catch errors and fix them.<br>But even those systems take time to build and manage.<br>That&#8217;s why some businesses&#8212;like consulting or handmade crafts&#8212;can only grow so far.<br>Their value is deeply tied to time, skill, and individual care.</p><p>That&#8217;s the deeper insight:</p><ul><li><p>Great entrepreneurs <em>know</em> what can scale&#8212;and what can&#8217;t.</p></li><li><p>They push everything they can into parallel systems.</p></li><li><p>And they focus hard on solving the bottlenecks that truly need time.</p></li></ul><p>This changes how you think.</p><p>You stop chasing to-do lists.<br>You start building machines.<br>You think in code, in teams, in systems, in content.<br>You build things that <em>work while you sleep</em>.</p><p>Look at what you&#8217;re building today.</p><p>Are you stuck going deeper into a time trap?<br>Or are you stepping sideways&#8212;into space, into systems, into leverage?</p><p>Ask yourself that. Be honest.</p><p>You can achieve big things if you stop going faster in a straight line and start moving smarter in <em>every</em> direction the universe allows. 
Use the three dimensions as much as you can, because in every moment, you have a limitless supply of them.</p>]]></content:encoded></item><item><title><![CDATA[Prioritisation Under Extreme Uncertainty]]></title><description><![CDATA[The era of exponential opportunities is here. There's too much to do. People who master their own attention and other people's attention will win. Here's how to master your own attention.]]></description><link>https://www.whycurious.com/p/prioritisation-under-extreme-uncertainty</link><guid isPermaLink="false">https://www.whycurious.com/p/prioritisation-under-extreme-uncertainty</guid><dc:creator><![CDATA[Karunakar Gautam]]></dc:creator><pubDate>Sat, 05 Apr 2025 09:00:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9dYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19090fc6-b3e4-4b93-8fc4-aa08ad9c4074_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Startups have very few resources. OpenAI in the early days, for example, had far less than Google DeepMind, given their goal of AGI. They could run fewer experiments. So, how does one decide what is worth doing in such a resource-constrained, uncertain environment? How do you pick a task?</p><p>Most prioritisation methods are built for large organisations and work well when you have a lot of data and resources. For very small teams dealing with extreme uncertainty, you don&#8217;t have the data early on to make clear prioritisation decisions. So how do you move forward?</p><p>Based on all my reading on prioritisation and my own experience over the last year as a startup founder, here&#8217;s what I&#8217;ve landed on.</p><p>At a high level, there are only two things to worry about:</p><ul><li><p>Survive (stay in the game)</p></li><li><p>Achieve the goal</p></li></ul><p>Depending on your situation, you may have to shift your main goal between these two modes. Once you know which one is your goal, you can move forward.</p><h4>Step 1: Goal &amp; KPI Alignment</h4><ul><li><p><strong>Action</strong>: Does this task serve the ultimate goal? If not, cut it.</p></li><li><p><strong>Filter</strong>: Yes/No. No alignment, no go.</p></li></ul><h4>Step 2: Minimum Cut</h4><ul><li><p><strong>Action</strong>: If you could only do one thing, would this make the cut?</p></li><li><p><strong>Question</strong>: Is this the most essential move toward the goal?</p></li><li><p><strong>Filter</strong>: Score necessity (1-5). Only 3+ survive.</p></li></ul><h4>Step 3: Leverage Potential</h4><ul><li><p><strong>Action</strong>: Does it unlock big wins&#8212;second/third-order impacts, multipliers, or compounding effects?</p></li><li><p><strong>Question</strong>: Will this create exponential progress down the road?</p></li><li><p><strong>Filter</strong>: Score leverage (1-5). Aim for 4+ to prioritize game-changers.</p></li></ul><h4>Step 4: Resource Fit + 80/20</h4><ul><li><p><strong>Action</strong>: Can we do it with what we have (time, money, compute, people)? 
Can we 80/20 it&#8212;get 80% of the value with 20% of the effort?</p></li><li><p><strong>Question</strong>: What&#8217;s the bottleneck? How do we hack it leaner and faster?</p></li><li><p><strong>Filter</strong>: Score feasibility (1-5). Redesign heavy tasks for less. 3+ to proceed.</p></li></ul><h4>Step 5: Learning Speed</h4><ul><li><p><strong>Action</strong>: How fast will we get feedback to learn or pivot?</p></li><li><p><strong>Question</strong>: When do we know if it works or fails?</p></li><li><p><strong>Filter</strong>: Score speed (1-5). Favor 4+ for quick iteration.</p></li></ul><h4>Step 6: Team Fit</h4><ul><li><p><strong>Action</strong>: Who&#8217;s the best mind to nail it efficiently?</p></li><li><p><strong>Question</strong>: Does this play to our strengths?</p></li><li><p><strong>Filter</strong>: Score fit (1-5). 3+ ensures execution edge.</p></li></ul><h4>Step 7: Worth-It Score</h4><ul><li><p><strong>Action</strong>: Add scores from Steps 2-6 (out of 25).</p></li><li><p><strong>Formula</strong>: Minimum (5) + Leverage (5) + Resource (5) + Speed (5) + Fit (5) = Total.</p></li><li><p><strong>Decision</strong>:</p><ul><li><p>20-25: Do now&#8212;high impact, low risk.</p></li><li><p>15-19: Plan soon if resources align.</p></li><li><p>10-14: Backburner&#8212;revisit later.</p></li><li><p>Below 10: Cut unless mandatory.</p></li></ul></li></ul><h4>Step 8: Gut Check</h4><ul><li><p><strong>Action</strong>: Ask: &#8220;If I skip this, will I regret it in 6 months?&#8221;</p></li><li><p><strong>Filter</strong>: If gut screams yes but score&#8217;s low, dig deeper. Otherwise, trust the score. (Tiebreaker: Pick fastest or highest leverage.)</p><p></p></li></ul><p>Once you have a list of tasks that pass the above regimen, they are worth doing. The final step is to maximise throughput. Here&#8217;s how to do it.</p><h4>Step 9: Maximise Throughput</h4><ul><li><p><strong>Action</strong>: Define the smallest, smartest next step. 
From the shortlist, can tasks run in parallel? How do we sequence them for the least time?</p></li><li><p><strong>Question</strong>: What&#8217;s the immediate move? How do we max throughput?</p></li><li><p><strong>Filter</strong>: Map dependencies, group parallel tasks, order by bottlenecks or speed.</p><p></p></li></ul><p>Test it out and remove the steps that don&#8217;t matter for your case. Or, if you have data, use the RICE framework.</p>]]></content:encoded></item><item><title><![CDATA[Insights about LLMs]]></title><description><![CDATA[PS: highly unstructured and unedited draft. 
But good enough to be valuable to you.]]></description><link>https://www.whycurious.com/p/insights-about-llms</link><guid isPermaLink="false">https://www.whycurious.com/p/insights-about-llms</guid><dc:creator><![CDATA[Karunakar Gautam]]></dc:creator><pubDate>Thu, 30 May 2024 13:35:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9dYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19090fc6-b3e4-4b93-8fc4-aa08ad9c4074_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p><strong>Insights from Dwarkesh Patel&#8217;s podcast</strong></p><p>With longer context, Gemini was able to learn an obscure human language, completely in context.</p><p>During inference, the attention operation is linear with respect to context length. The quadratic cost is a problem only during training.</p><p>Longer context yields better problem-solving ability. Predicting the next token becomes easier with longer context. In that respect, it is already superhuman. Superhuman context allows it a working memory that humans don&#8217;t have. It may have interesting implications. This is unique to LLMs.</p><p>More learning is now happening in the forward pass, instead of fixed training.</p><p>GPT2 -&gt; GPT3: the key shift was that GPT3 could do meta-learning. It was a completely unexpected, emergent behaviour of scale.</p><p>By having more context, the model gets more forward passes during inference, and can therefore learn more complex things.</p><p>The human brain is recurrent in nature. It can spend more compute on harder problems, and keep going deeper. But there&#8217;s always a finite number of forward passes you can do. Technically, human language supports infinite recursion, but practically, you only see 5-7 levels because of the limits of human working memory. 
So, adding more layers can get us to human intelligence, to the extent it gets us close to this level of recursion.</p><p>Anthropic has a definition of what LLMs are doing: they are like a neural computer, doing read/<a href="https://web.archive.org/web/20240530105150/https://whycurious.com/write-even-if-youre-bad-at-it/">write</a> operations.</p><p>70% of human neurons are in the cerebellum, which controls fine motor skills and attention. If you mathematically model the mechanism for retrieving information from a lossy source, it resembles the cerebellum architecture in a variety of species, and it looks similar to the attention mechanism of the neural networks we have today. So, there&#8217;s a 3-way convergence here.</p><p>Most intelligence is pattern matching and association. If you have a large enough hierarchy of associations, you can do pattern matching effectively.</p><p>If you query your brain for someone&#8217;s face in a rainy and dark setting, it returns something, and then you update the query until you get something that matches reality, and that also queries something that gives you associated memories of that person. Similarly, when you query the letter A, it gives B, and so on.</p><p>Human memory is reconstructive, and is linked with imagination. 
When you recall a memory, you get a dense representation of it back and then reconstruct it, so there are imaginary bits in there.</p><p>This is the ability of Sherlock Holmes: to see clues, form a higher-level abstract representation of their associations, and pattern match against them.</p><p>If intelligence is just associations all the way down, is the worry about an intelligence explosion justified, given that it&#8217;ll just be building higher-level associations, bounded by what humans can do, only faster?</p><p><br>Insights from Ilya about progress in LLM capabilities</p></blockquote><ul><li><p>Reliability, where we can fully trust the LLM not to miss critical details. For example, a summary will not miss an obviously important detail.</p></li><li><p>Progress is going to astound us.</p></li><li><p>The ability to clearly follow the instructions and intent of its operator.</p></li><li><p>Multimodality. Ingest and produce all modalities. This results in a better world model.</p></li><li><p>GPT4 had fundamental architecture improvements that make it a better next-token predictor. Focus on predicting the next token will lead to universal reasoning.</p></li><li><p>Reliability jumps are to be expected.</p></li></ul><blockquote><p><strong>Notes from the a16z podcast between Marc and Ben</strong></p><p>Build data moats and let better AI help you. For example, a research app with data from curated sources, like the a16z podcast, for investors and founders.</p><p>Alignment makes models dumber.</p><p>Models will know more and hallucinate less, so think of all the axes and see where growth is inevitable: context length, speed, problem solving.</p><p>Tests for superhuman reasoning skills.</p><p>An LLM simulates average intelligence on its base prompt. But data from smart humans is in there. It needs better prompting to access that part of the latent space. You can unlock the latent supergenius. Maybe you can fine-tune on data from smart people only.</p><p>Can AI invent new physics? 
That kind of intelligence is 1 in 3 billion among humans.</p></blockquote><ul><li><p>I think yes. The building blocks are imagination and rationality, and you can do that with AI. The question is how much compute you are willing to spend on it. A smarter AI will get there on a smaller compute budget.</p></li><li><p>Given a baseline intelligence, you can just let it spend more compute to simulate higher intelligence.</p></li></ul><blockquote><p>AI is currently better at validation and scoring than at generation. People miscalculate this because they anthropomorphise it, thinking: how can something be better at evaluation without being equally good at generation? But those are 2 separate functions inside the LLM, and one can be better than the other.</p></blockquote><ul><li><p>One alpha is to find such capability pairs, where one capability is superhuman in LLMs and can aid the human with its counterpart.</p></li></ul><blockquote><p>Future improvements are going to come from:</p></blockquote><ul><li><p>More compute, and training for longer on the same data. Currently there&#8217;s a constraint on that.</p></li><li><p>More data. There&#8217;s lots of it in the world, and startups are getting it.</p></li><li><p>Higher-quality data. Microsoft trained a good model on a smaller, higher-quality dataset.</p></li><li><p>Better talent being funneled in.</p></li></ul><blockquote><p>The GPT Wrapper Argument</p></blockquote><ul><li><p>Traditionally, the platform layer is just a building block. The product then uses it to solve a problem for the customer through a unique understanding of their pain points and workflows.</p></li><li><p>If you are building something analogous to this, it is likely to work: you use the platform to extract a functionality that the platform is not designed for, and getting that functionality out of the platform is really hard. That&#8217;s the way to go.</p></li><li><p>Work backward from the pricing and value you can offer, e.g., debt collection using AI agents. 
OpenAI&#8217;s GPT is not going to collect debt on your behalf. You need specific integration and domain knowledge on top of base intelligence. You can use this as a test for your <a href="https://web.archive.org/web/20240530105150/https://whycurious.com/delta-4-theory-of-startup-evaluation/">idea</a>, by thinking about how much you can charge. Value capture tells you a lot about the defensibility of your idea. Can you charge for the value that you are creating? Or are you just charging for the extra work you did by creating a wrapper, hoping that your customer won&#8217;t do the same?</p></li><li><p>It&#8217;s clear that the model layer is going to be commoditised, and people will just plug and play different models. So, the value is going to accrue to the tools and orchestration layer.</p></li></ul><p><strong><a href="https://web.archive.org/web/20240530105150/https://whycurious.com/hacking-your-own-lifes-source-code/">See also Introspection for Self Alignment</a></strong></p><p>Economics of Models</p><ul><li><p>Better models increase software quality and decrease the number of developers needed to create it. Paradoxically, this can create more demand for software and raise quality expectations, because there&#8217;s more competition. The use of CGI in Hollywood raised audience expectations, and moviemaking now costs even more despite the efficiency improvements.</p></li><li><p>Humans have an ability to come up with new things they need. What are we going to need next?</p><ul><li><p>More things up the ladder</p></li></ul></li><li><p>Demand for software is perfectly elastic. As price goes down, demand goes up. As soon as constraints go down, people always find a way to automate more things.</p></li><li><p>Things that are high-dimensional are a good fit for AI. Like daily medical diagnoses based on your biomarkers and bloodwork.</p></li></ul><p>Moats</p><ul><li><p>&#8220;Data is a moat&#8221; is the new cliche. But it&#8217;s not a moat. 
The huge amount of data on the internet trumps your specific data, which may be useful only in corner cases. There&#8217;s no real market for data. If data had value, you&#8217;d see large marketplaces for it; there&#8217;s only a small marketplace for sure.</p></li><li><p><strong>A16Z improved their investor relations product through the use of their data on company performance, so their LPs can ask the AI questions about the current track record.</strong></p></li><li><p>A very specific kind of data has value. In most cases you can just increase your own competitiveness by using it.</p></li><li><p>Nobody has data they can directly sell to others. But most have data they can feed to an intelligence to improve their business.</p></li><li><p>Large companies are probably using personal data, given that they don&#8217;t release any information around that.</p></li></ul><p>Internet vs AI</p><ul><li><p>The internet was a network; AI is more like a new kind of computer.</p></li><li><p>That decides the kind of competitive dynamics and opportunities you&#8217;ll have.</p></li><li><p>The internet enabled applications that run on top of networks, with network effects at scale and positive feedback loops.</p></li><li><p>There are some network effects in AI, but it is more like a microprocessor. Information goes in, and an output comes out.</p></li><li><p>LLMs are probabilistic computers.</p></li><li><p>There&#8217;s composability with AI.</p></li><li><p>The lessons from the early computer market are more applicable.</p><ul><li><p>The original computers were few and large, dominated by large corps.</p></li><li><p>People thought that very few would need compute, like mega corps only.</p></li><li><p>Nowadays there&#8217;s the idea that there will be a few large god models. But if we follow the computer trajectory, we now have chips everywhere, in all shapes and sizes.</p></li><li><p>Modern cars have 200 computers. 
Everything runs on electricity, has a chip, and is connected to the internet.</p></li><li><p>It forms a kind of tree, with large supercomputers sitting in datacenters, IoT devices at the leaves, and PCs and smartphones in the middle. The kind you use depends on what you need.</p></li></ul></li><li><p>This means we&#8217;ll have different sizes of models. It&#8217;ll be an ecosystem of models.</p></li><li><p>Earlier, the complexity of using computers was high.</p></li><li><p>AI is the easiest computer to use because it uses English. What is the lock-in here? Size, price, speed, choice? Do you have free choice across these dimensions, or are you locked in to the god model?</p></li><li><p>Every AI company is going to get funded, a lot of them will go bust, and there&#8217;ll be an overbuild-out of chips. Some chip companies will go bankrupt. Investors will lose a lot of money. We don&#8217;t need that many AI companies.</p></li><li><p>This is just the nature of all technology. Hype cycles help us build the infra.</p></li><li><p>The internet went through an open phase. 
Initially, networks were proprietary; then the internet showed up and everything opened up, until a few big companies, like Google, ended up owning the discovery part.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Writing for Clarity]]></title><description><![CDATA[Skilful writing is indistinguishable from critical thinking as it forces you to clarify and reorganise your thoughts]]></description><link>https://www.whycurious.com/p/writing-for-clarity</link><guid isPermaLink="false">https://www.whycurious.com/p/writing-for-clarity</guid><dc:creator><![CDATA[Karunakar Gautam]]></dc:creator><pubDate>Thu, 21 Mar 2024 13:41:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9dYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19090fc6-b3e4-4b93-8fc4-aa08ad9c4074_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine trying to solve a massive jigsaw puzzle in your head. Not gonna happen, right? You need all the pieces laid out in front of you so you can move them around and see how they fit together. That&#8217;s exactly how writing for clarity feels. You start with jumbled thoughts that barely make sense, but as you <a href="https://web.archive.org/web/20240322110012/https://whycurious.com/write-even-if-youre-bad-at-it/">write</a>, new ideas emerge and connections become clearer. Before you know it, you&#8217;ve got a complete picture &#8211; and a much clearer understanding of your own thinking.</p><p>The more I use writing to clear my head, the more it becomes second nature. Whenever I&#8217;m dealing with a tough problem or a decision with lots of moving parts, I feel an instinctive urge to start writing.</p><p>Writing has helped me see the difference between fuzzy thinking and clear thinking. Before I started writing things down, my thoughts were almost always muddled and jumbled. 
There were kernels of good ideas and thoughts there, but they were underdeveloped, and not really thought through. I didn&#8217;t even realise it because I didn&#8217;t know any better. If you&#8217;ve never tried using writing to solve problems, there&#8217;s a good chance your thinking is stuck in the &#8220;fuzzy zone&#8221; too.</p><p>A mistake that some people make is that they write things down in a note-like format. They lay down all the pieces, but don&#8217;t do the work of organising them. You need to treat this as if you&#8217;re going to publish it in a big magazine, like it&#8217;s something really important. All the magic is in that process. Because that&#8217;s when you&#8217;ll see which ideas don&#8217;t fit, or which ones are completely flawed and need to be thrown out.</p><p>Now, I feel way more confident tackling even the toughest problems. When an issue pops up, whether in my personal life or at work, I usually give it a few days to simmer. But if it starts feeling more complicated than I first thought, I know that writing it out will help me find the best solution.</p><p>Honestly, it feels like a superpower! My core problem-solving skills haven&#8217;t changed, but simply writing things down and exploring different scenarios in written format leads to way better outcomes. Writing is a multiplier on your core problem-solving ability.</p><p>Writing also helps me put all those fancy frameworks I&#8217;ve learned from books into practice. Trying to run through a checklist, apply different frameworks, and then make a decision &#8211; all in my head &#8211; is just too hard. Writing allows for that kind of deliberate, step-by-step thinking. When the problem and its potential consequences are laid out in front of you, it&#8217;s easier to see the big picture and think several steps ahead. 
You can be as thorough as you need to be, and your mind has all the context it needs to crack the problem wide open.</p>]]></content:encoded></item><item><title><![CDATA[Life’s Work]]></title><description><![CDATA[If this is the only thing you ever accomplish in your life, would you still do it?]]></description><link>https://www.whycurious.com/p/lifes-work</link><guid isPermaLink="false">https://www.whycurious.com/p/lifes-work</guid><dc:creator><![CDATA[Karunakar Gautam]]></dc:creator><pubDate>Sat, 16 Mar 2024 13:44:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9dYW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19090fc6-b3e4-4b93-8fc4-aa08ad9c4074_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is a question you can ask yourself to verify whether what you are working on is your life&#8217;s work.</p><p>It&#8217;s a very high bar, so it&#8217;s OK if the answer is no. But try to move in the direction of yes. It&#8217;s usually not clear when you start working on something. But if, as you go deeper, the answer remains a no for years on end, it&#8217;s time to re-evaluate things.</p>]]></content:encoded></item></channel></rss>