<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Hackerspot: AI Security]]></title><description><![CDATA[A focused collection of clear, practical notes on AI, machine learning, and how to secure the systems built with them. Expect short guides, real-world examples, and hands-on explanations of model risks, safe deployment, and modern attack techniques. Simple, useful, and built for engineers who actually want to understand what’s going on.]]></description><link>https://www.hackerspot.net/s/ai-security</link><image><url>https://substackcdn.com/image/fetch/$s_!o8CQ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d62e87e-ddb5-4613-87de-9c210c430032_160x160.png</url><title>Hackerspot: AI Security</title><link>https://www.hackerspot.net/s/ai-security</link></image><generator>Substack</generator><lastBuildDate>Sun, 10 May 2026 18:45:00 GMT</lastBuildDate><atom:link href="https://www.hackerspot.net/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Hackerspot]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[hackerspot@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[hackerspot@substack.com]]></itunes:email><itunes:name><![CDATA[Chady]]></itunes:name></itunes:owner><itunes:author><![CDATA[Chady]]></itunes:author><googleplay:owner><![CDATA[hackerspot@substack.com]]></googleplay:owner><googleplay:email><![CDATA[hackerspot@substack.com]]></googleplay:email><googleplay:author><![CDATA[Chady]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[How Does AI Actually Learn? 
]]></title><description><![CDATA[Training, Data, and Loss Functions Explained]]></description><link>https://www.hackerspot.net/p/how-does-ai-actually-learn</link><guid isPermaLink="false">https://www.hackerspot.net/p/how-does-ai-actually-learn</guid><dc:creator><![CDATA[Chady]]></dc:creator><pubDate>Sun, 10 May 2026 16:11:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JdNx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>How does AI learn? Training an AI model isn&#8217;t magic. It&#8217;s a mechanical process: you show the model examples, measure how wrong it is, and adjust its internal knobs to be less wrong. Repeat millions of times, and you get a model that works.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JdNx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JdNx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JdNx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg 848w, https://substackcdn.com/image/fetch/$s_!JdNx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!JdNx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JdNx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg" width="812" height="488" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:488,&quot;width&quot;:812,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:127689,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JdNx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JdNx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg 848w, https://substackcdn.com/image/fetch/$s_!JdNx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!JdNx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9af7f3e8-884a-4c8c-bb94-048980385f80_812x488.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Here&#8217;s the machinery underneath.</p><h2>The Training Pipeline: Data to Model</h2><p>Before training even starts, you need a plan for your data.</p><p>You collect raw data (emails, images, transactions, sensor readings&#8212;whatever your problem requires). You clean it (remove garbage, fix errors, handle missing values). 
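</p>

<p>The cleaning step, for instance, fits in a few lines of pure Python. A minimal sketch, with invented field names and rules:</p>

```python
# Toy cleaning pass: drop records with missing or impossible values,
# and repair an obvious inconsistency. All data here is invented.
raw = [
    {"amount": 12.5, "country": "US"},
    {"amount": None, "country": "US"},  # missing value: drop
    {"amount": -3.0, "country": "FR"},  # negative amount: treat as garbage, drop
    {"amount": 99.0, "country": "fr"},  # inconsistent casing: fix
]

clean = [
    {"amount": r["amount"], "country": r["country"].upper()}
    for r in raw
    if r["amount"] is not None and r["amount"] >= 0
]

print(clean)  # two records survive, both with uppercase country codes
```

<p>Real projects use libraries like pandas for this, but the logic is the same: decide a rule for each kind of defect and apply it uniformly.</p>

<p>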
You normalize it (scale numbers to a consistent range so the model doesn&#8217;t get confused by different units). Then you split it into three parts: a training set, a validation set, and a test set.</p><p>The <strong>training set</strong> is what the model learns from. You show it thousands of examples, and the model adjusts itself based on what it sees.</p><p>The <strong>validation set</strong> is a referee. While training happens, you periodically check the model against data it&#8217;s never seen before. If the model is overfitting&#8212;memorizing training examples instead of learning general patterns&#8212;the validation set will catch it. The model never learns from validation data; it&#8217;s only for observation.</p><p>The <strong>test set</strong> is a final exam. You keep it locked away until training is completely done. Only then do you measure the model&#8217;s real-world accuracy on data it&#8217;s truly never encountered.</p><p>This separation is critical. If you test on the same data the model was trained on, you&#8217;ll get an inflated score that doesn&#8217;t reflect how the model will perform on new problems.</p><h2>Loss Functions: The Scoreboard</h2><p>How does the model know it&#8217;s wrong?</p><p>A <strong>loss function</strong> measures how bad the model&#8217;s predictions are. The lower the loss, the better the model. Different problems use different loss functions.</p><p>For a spam filter, the loss might be: &#8220;How many emails did you misclassify?&#8221; If the model predicts &#8220;spam&#8221; for an email that&#8217;s actually legitimate, the loss goes up.</p><p>For an image classifier that identifies dog breeds, the loss might measure the probability distance between the predicted label and the true label. If the model is 90% confident it&#8217;s a poodle but it&#8217;s actually a dachshund, the loss is high. 
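</p>

<p>That intuition matches cross-entropy loss, which is short enough to sketch directly. The probabilities below are invented for illustration:</p>

```python
import math

def cross_entropy(predicted_prob_of_true_label: float) -> float:
    # Loss is -log(p): near zero when the model is confident and right,
    # large when it assigns low probability to the true label.
    return -math.log(predicted_prob_of_true_label)

# Model puts only 10% on "dachshund", the true label -> high loss:
print(round(cross_entropy(0.10), 2))  # 2.3

# Model puts 95% on "dachshund" -> low loss:
print(round(cross_entropy(0.95), 2))  # 0.05
```

<p>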
If it&#8217;s 95% confident it&#8217;s a dachshund, the loss is lower.</p><p>Here&#8217;s a concrete example:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V_s8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V_s8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png 424w, https://substackcdn.com/image/fetch/$s_!V_s8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png 848w, https://substackcdn.com/image/fetch/$s_!V_s8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png 1272w, https://substackcdn.com/image/fetch/$s_!V_s8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V_s8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png" width="1352" height="268" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:268,&quot;width&quot;:1352,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:40492,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.hackerspot.net/i/193809563?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!V_s8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png 424w, https://substackcdn.com/image/fetch/$s_!V_s8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png 848w, https://substackcdn.com/image/fetch/$s_!V_s8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png 1272w, https://substackcdn.com/image/fetch/$s_!V_s8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F498593be-39fb-443e-8e0f-cf79aff64635_1352x268.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2>Gradient Descent: Rolling Downhill</h2><p>Now, how does the model actually adjust itself?</p><p>Imagine you&#8217;re blindfolded at the top of a hill, trying to reach the lowest point. 
You can&#8217;t see the whole landscape. You feel the slope under your feet, and you take a small step downhill. Then you check the slope again and take another step. Repeat long enough, and you&#8217;ll reach a valley.</p><p><strong>Gradient descent</strong> is this process. The model calculates the slope of the loss function with respect to each of its parameters (called the &#8220;gradient&#8221;). Then it takes a small step in the direction that reduces loss. It does this thousands or millions of times.</p><p>The word &#8220;gradient&#8221; sounds fancy but it just means: &#8220;In which direction does the loss go down, and how steep is it?&#8221;</p><h2>Backpropagation: Assigning Blame</h2><p>Gradient descent needs to know which parameters to adjust. This is where <strong>backpropagation</strong> comes in.</p><p>Backpropagation is the mechanism that calculates how much each internal parameter contributed to the error. It works backward from the output, asking: &#8220;How did this layer&#8217;s weights affect the mistake? And the layer before that?&#8221;</p><p>Think of it as an error audit trail. If the model predicted 95 instead of 50, backpropagation traces the error backward through every calculation and says, &#8220;This weight contributed 3 to the error. That weight contributed 7. This one contributed -2.&#8221; Gradient descent then adjusts these weights based on their contributions.</p><p>You don&#8217;t need to understand the mathematics to use it. The key insight: backpropagation lets the model figure out what to fix.</p><h2>Epochs and Batch Size: The Training Rhythm</h2><p>Training happens in cycles.</p><p>An <strong>epoch</strong> is one full pass through the entire training dataset. If you have 10,000 training examples, one epoch means the model has seen all 10,000 exactly once.</p><p>But you don&#8217;t show the model all 10,000 at once. You show them in groups called <strong>batches</strong>. 
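</p>

<p>Before pinning down the terms, here is the whole loop in miniature: gradient descent over batches, fitting y = 2x with a single weight and squared-error loss. Everything about this sketch (data, learning rate, batch size) is illustrative:</p>

```python
# Toy training loop: learn w so that w * x approximates y = 2 * x.
data = [(x, 2.0 * x) for x in range(1, 11)]  # 10 labeled examples
w = 0.0       # the single learnable parameter
lr = 0.01     # learning rate: how big each downhill step is
batch_size = 2

for epoch in range(100):                       # 100 full passes = 100 epochs
    for i in range(0, len(data), batch_size):  # one batch at a time
        batch = data[i:i + batch_size]
        # Gradient of the mean squared error (w*x - y)^2 with respect to w:
        grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
        w -= lr * grad                         # step downhill

print(round(w, 3))  # converges to 2.0
```

<p>In practice you never write this by hand; frameworks compute the gradients (backpropagation) and run the loop for you. The structure, though, is exactly this.</p>

<p>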
A batch size of 32 means you process 32 examples, calculate their total loss, backpropagate, adjust the weights, then move to the next 32. This happens because processing one example at a time is slow, and processing all of them at once requires too much memory.</p><p>A typical training run might look like: 100 epochs, batch size 32. The model sees all training data 100 times, processing it in batches of 32 each time. Loss decreases with each epoch until it plateaus. That&#8217;s when you stop.</p><h2>Data Quality Beats Algorithm Quality</h2><p>Here&#8217;s something instructors wish beginners knew: <strong>better data beats better algorithms.</strong></p><p>You can have the fanciest, most sophisticated model ever designed. But if your training data is garbage&#8212;full of errors, biased, or unrepresentative of the real world&#8212;the model will be garbage. Conversely, mediocre algorithms trained on clean, representative data often outperform fancy algorithms trained on messy data.</p><p>This is why data preparation takes longer than algorithm selection in real projects. And why data engineers are in high demand.</p><h2>The Trust Boundary: Training as a Security Gate</h2><p>The training process is a boundary where trust matters.</p><p>If someone poisons your training data&#8212;inserting malicious examples or corrupting labels&#8212;the model learns the poisoned patterns. It becomes a poisoned model. The model doesn&#8217;t know it learned the wrong thing. It&#8217;s confident. It just works based on what it saw.</p><p>This is especially dangerous with self-supervised learning and large language models. An LLM trained on poisoned text learns &#8220;facts&#8221; that are false, and those falsehoods get baked into billions of parameters. 
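</p>

<p>The effect shows up even at toy scale. Below, a nearest-centroid spam filter, a deliberately simplified stand-in for real training with invented data, is trained twice: once on clean labels, and once after an attacker injects spammy examples mislabelled as ham:</p>

```python
# Each email is reduced to one feature: a count of suspicious words.
clean_data = [(0, "ham"), (1, "ham"), (2, "ham"),
              (8, "spam"), (9, "spam"), (10, "spam")]

def train_centroids(data):
    # "Training" here is just averaging each class's feature value.
    spam = [x for x, label in data if label == "spam"]
    ham = [x for x, label in data if label == "ham"]
    return sum(spam) / len(spam), sum(ham) / len(ham)

def predict(x, centroids):
    spam_c, ham_c = centroids
    return "spam" if abs(x - spam_c) < abs(x - ham_c) else "ham"

# The attacker slips spammy-looking emails labelled "ham" into training:
poisoned_data = clean_data + [(5, "ham"), (6, "ham"), (7, "ham")]

print(predict(6, train_centroids(clean_data)))     # spam
print(predict(6, train_centroids(poisoned_data)))  # ham: the poison worked
```

<p>The poisoned model is just as confident as the clean one. Nothing in its parameters says "tampered".</p>

<p>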
The model has &#8220;memorized&#8221; the corruption.</p><p>This is why training data provenance (knowing where it came from and who had access to it) matters in security-critical applications.</p><h2>Bringing It Together</h2><p>Training is straightforward in outline: prepare data &#8594; measure loss &#8594; calculate gradients &#8594; adjust weights &#8594; repeat. But this simple loop, repeated millions of times on billions of examples, produces systems that can recognize patterns humans barely see.</p><p>The key to good models isn&#8217;t fancy mathematics. It&#8217;s clean data, a sensible loss function, and patience.</p>]]></content:encoded></item><item><title><![CDATA[Supervised, Unsupervised, and Reinforcement Learning: What’s the Difference?]]></title><description><![CDATA[Machine learning isn&#8217;t one monolith.]]></description><link>https://www.hackerspot.net/p/supervised-unsupervised-and-reinforcement</link><guid isPermaLink="false">https://www.hackerspot.net/p/supervised-unsupervised-and-reinforcement</guid><dc:creator><![CDATA[Chady]]></dc:creator><pubDate>Mon, 04 May 2026 04:30:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!w8BP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Machine learning isn&#8217;t one monolith. The way an AI system learns depends entirely on what data you have and what problem you&#8217;re solving. 
There are three main categories&#8212;supervised, unsupervised, and reinforcement learning&#8212;each built on a different principle.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!w8BP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!w8BP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg 424w, https://substackcdn.com/image/fetch/$s_!w8BP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg 848w, https://substackcdn.com/image/fetch/$s_!w8BP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!w8BP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!w8BP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg" width="872" height="580" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:580,&quot;width&quot;:872,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:158527,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!w8BP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg 424w, https://substackcdn.com/image/fetch/$s_!w8BP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg 848w, https://substackcdn.com/image/fetch/$s_!w8BP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!w8BP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b2bb65-0969-4692-a6c8-3eb1bf817f33_872x580.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Supervised Learning: Learning With a Teacher</h2><p>Supervised learning works exactly as it sounds: the model learns from examples labeled with the correct answers.</p><p>You show the model thousands of emails marked &#8220;spam&#8221; or &#8220;not spam.&#8221; You show it thousands of medical images with a diagnosis already attached. You show it credit card transactions labeled &#8220;fraud&#8221; or &#8220;legitimate.&#8221; The model sees the input (the email text, the image, the transaction details) paired with the correct output, and learns to predict that output for new, unseen data.</p><p>This is the workhorse of applied AI. If you have labeled data, supervised learning is usually your first choice.</p><p><strong>Real example:</strong> A bank wants to detect fraudulent transactions. They have historical data: millions of past transactions, each marked as either fraud or legitimate. The bank trains a supervised model on this data. 
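</p>

<p>A miniature stand-in for that detector: a 1-nearest-neighbour rule over labelled toy transactions. All amounts and labels are invented:</p>

```python
# Labelled history: (transaction_amount, label).
history = [(12.0, "legitimate"), (25.0, "legitimate"), (40.0, "legitimate"),
           (900.0, "fraud"), (1200.0, "fraud"), (1500.0, "fraud")]

def classify(amount):
    # 1-nearest-neighbour: copy the label of the closest known example.
    nearest = min(history, key=lambda pair: abs(pair[0] - amount))
    return nearest[1]

print(classify(30.0))    # legitimate
print(classify(1100.0))  # fraud
```

<p>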
When a new transaction arrives, the model predicts &#8220;fraud&#8221; or &#8220;legitimate&#8221; based on patterns it learned from the labeled examples.</p><p>Supervised learning does have a catch: someone has to label the data. For simple cases like emails (spam filters were manually curated for years), that&#8217;s feasible. For medical imaging, you need expert radiologists. Labeling is expensive, time-consuming, and sometimes requires domain expertise. And if the labels are wrong, the model learns the wrong thing&#8212;a vulnerability we&#8217;ll return to later.</p><h2>Unsupervised Learning: Finding Patterns Without Answers</h2><p>Unsupervised learning flips the script. You give the model unlabelled data and say: &#8220;Find patterns.&#8221;</p><p>The model isn&#8217;t trying to predict a specific output. It&#8217;s trying to discover structure. It might cluster customers into groups based on their shopping behaviour without being told what those groups should be. It might identify which transactions look weird compared to the crowd&#8212;potential fraud or system errors. It might compress images into a smaller representation that captures the essential structure while discarding noise.</p><p>Because there&#8217;s no &#8220;correct answer,&#8221; unsupervised learning is messier to evaluate. You have to decide whether the patterns the model found are useful. But it&#8217;s powerful when you have tons of unlabelled data and want to explore it without predefined categories.</p><p><strong>Real example:</strong> An e-commerce platform has millions of user sessions but hasn&#8217;t manually categorised them. They run unsupervised clustering and discover that users naturally group into three distinct patterns: bargain hunters (frequent price checking), comparison shoppers (research-heavy), and impulse buyers (quick checkout). The platform never labelled these groups&#8212;the model found them.</p><p>The trade-off is looser control. 
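</p>

<p>The clustering mechanic itself is compact. A toy 1-D k-means with two clusters, on invented session data:</p>

```python
# Unlabelled 1-D data, e.g. pages viewed per session. No labels anywhere.
data = [1, 2, 3, 20, 21, 22]
centres = [1.0, 2.0]  # deliberately poor starting guesses

for _ in range(10):  # alternate assignment and re-centering
    groups = ([], [])
    for x in data:
        # Assign each point to its nearest centre...
        nearest = 0 if abs(x - centres[0]) <= abs(x - centres[1]) else 1
        groups[nearest].append(x)
    # ...then move each centre to the mean of its group.
    centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]

print(centres)  # [2.0, 21.0]: the two clusters found themselves
```

<p>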
You can&#8217;t easily specify what patterns you want to find. The model might find patterns that are statistically real but not useful for your business. It takes experimentation.</p><h2>Reinforcement Learning: Learning Through Reward and Penalty</h2><p>Reinforcement learning is the third path: the model learns by interacting with an environment and receiving rewards or penalties for its actions.</p><p>There&#8217;s no labelled training set. Instead, imagine a game-playing AI. It makes a move, sees the result, and gets a reward (if the move was good) or a penalty (if the move was bad). Over millions of games, it learns which moves tend to lead to victory. It never saw examples of &#8220;the correct move&#8221;&#8212;it discovered them through trial and error, guided by the reward signal.</p><p>Reinforcement learning powers game-playing systems like AlphaGo. It&#8217;s used in robotics (robots learn to walk by trial and error, getting rewarded for forward progress). It&#8217;s used in recommendation systems where the &#8220;reward&#8221; is whether a user clicks on a recommendation.</p><p>The catch: you have to design the reward carefully. If your reward signal is poorly designed, the system might find creative&#8212;and useless&#8212;ways to maximise it. An AI tasked with moving as fast as possible might learn to spin in circles instead of reaching the goal. We call this &#8220;reward hacking.&#8221;</p><h2>The Variants: Semi-Supervised and Self-Supervised</h2><p>Two hybrid approaches deserve mention.</p><p><strong>Semi-supervised learning</strong> uses a mix of labelled and unlabelled data. When labelling is expensive, you label a small portion of your data, then use unsupervised techniques on the unlabelled portion to improve your model&#8217;s performance. It&#8217;s a practical compromise.</p><p><strong>Self-supervised learning</strong> is newer and increasingly important. The model generates its own labels from structure in the data. 
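</p>

<p>Mechanically, the data labels itself; next-word training pairs, for instance, can be generated from raw text with no human in the loop (toy sentence):</p>

```python
# Turn raw text into (context, next_word) training pairs automatically.
text = "the model predicts the next word"
words = text.split()

pairs = [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

for context, target in pairs:
    print(f"{context!r} -> {target!r}")
# e.g. 'the model predicts the next' -> 'word'
```

<p>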
For example, if you&#8217;re training on text, you might mask out a word and ask the model to predict it. No human labeller needed. Modern large language models (LLMs) are trained this way: they learn by predicting the next word in a sentence, which is an automatically-generated label that requires no human effort. This approach has made scaling possible.</p><h2>Security: The Dark Side of Each Approach</h2><p>Each learning paradigm has its own vulnerabilities.</p><p>In supervised learning, if an attacker poisons the labelled data&#8212;inserting examples with incorrect labels&#8212;they corrupt the model&#8217;s understanding. Imagine a spam classifier that&#8217;s been fed mislabelled emails by an attacker. It learns the wrong patterns.</p><p>In unsupervised learning, if you know the clustering boundaries the model uses, you can craft data to evade detection. An anomaly detector identifies outliers based on distance from cluster centres. If an attacker knows those centres, they can craft a transaction or behaviour that hides inside a normal cluster.</p><p>In reinforcement learning, an attacker can exploit the reward system itself. If the system values speed and an attacker can trigger rewards in unintended ways, the AI chases those rewards instead of the intended goal.</p><p>In self-supervised learning, poisoning the training data has a subtle but serious effect: the model learns corrupted structure and the falsehoods become baked into its weights. An LLM trained on poisoned text learns to &#8220;know&#8221; things that aren&#8217;t true.</p><h2>So Which One Do I Use?</h2><p>There&#8217;s no universal answer. 
The choice depends on what data you have, what problem you&#8217;re solving, and what kinds of errors you can tolerate.</p><ul><li><p>Use supervised learning when you have labelled data and a clear prediction target.</p></li><li><p>Use unsupervised learning when you want to explore unlabelled data or detect anomalies without predefined categories.</p></li><li><p>Use reinforcement learning when you can simulate interaction with an environment and design a reward signal.</p></li></ul><p>Most real systems use a hybrid approach. And whatever you choose, remember: the learning mechanism is a trust boundary. Poisoned data produces poisoned models.</p>]]></content:encoded></item><item><title><![CDATA[What Is an AI Model, Actually? ]]></title><description><![CDATA[The Concept Explained Simply]]></description><link>https://www.hackerspot.net/p/what-is-an-ai-model-actually</link><guid isPermaLink="false">https://www.hackerspot.net/p/what-is-an-ai-model-actually</guid><dc:creator><![CDATA[Chady]]></dc:creator><pubDate>Sun, 26 Apr 2026 16:34:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KjNx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>An AI model is not software in the way you know software. It&#8217;s not a program with if-then statements. 
It&#8217;s a mathematical function with learned parameters&#8212;numbers that have been adjusted to recognize patterns in data.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KjNx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KjNx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KjNx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KjNx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KjNx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KjNx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg" width="850" height="489" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:489,&quot;width&quot;:850,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:163333,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KjNx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KjNx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KjNx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KjNx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66c5351f-203d-49d3-aa76-293bab06feaa_850x489.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Think of it like this: the <em>architecture</em> is the recipe structure. The <em>weights</em> (learned parameters) are the specific measurements tuned by tasting thousands of dishes.</p><h2>Model = Architecture + Weights</h2><p>The architecture is the skeleton&#8212;the layers of neurons, the way information flows through the system, and the rules that map inputs to outputs. You define the architecture. It&#8217;s the blueprint.</p><p>The weights are everything else. They&#8217;re numbers&#8212;sometimes billions of them. Each weight is a tiny adjustment that helps the model recognize patterns. You don&#8217;t define them; training does.</p><p>Here&#8217;s a concrete example. 
A simple image classifier might have this architecture:</p><ul><li><p>Input layer (the image pixels)</p></li><li><p>Hidden layer 1 (256 neurons)</p></li><li><p>Hidden layer 2 (128 neurons)</p></li><li><p>Output layer (10 categories: cat, dog, bird, etc.)</p></li></ul><p>The architecture tells you the shape. But there are millions of weights between those neurons. Those weights determine what the model actually &#8220;knows.&#8221; The same architecture trained on different data will have different weights and behave completely differently.</p><h2>What a Model Actually Does</h2><p>A model takes input and produces output. Here are some real examples:</p><ul><li><p><strong>Image model:</strong> you feed it a photo &#8594; it outputs a label (cat, dog, bird)</p></li><li><p><strong>Language model:</strong> you feed it text &#8594; it outputs more text (a completion, an answer, a translation)</p></li><li><p><strong>Audio model:</strong> you feed it sound &#8594; it outputs a transcript or classification</p></li><li><p><strong>Tabular model:</strong> you feed it a row of numbers &#8594; it outputs a prediction (will this customer churn?)</p></li></ul><p>The model doesn&#8217;t &#8220;think&#8221; in the way humans do. It doesn&#8217;t have reasoning or understanding. It&#8217;s a statistical function. Given input X, it produces output Y based on patterns it learned from training data.</p><p>For a language model like ChatGPT, the input is text. The model predicts the next word based on the previous words. Then it predicts the next word after that. And so on. Each prediction is a probability distribution over possible words.</p><p>It sounds simple because it is simple. The magic (and the mystery) comes from scale. Billions of parameters adjusted on trillions of words produce a system that <em>appears</em> to understand language. 
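</p><p>To make the scale point concrete, here&#8217;s a minimal sketch that counts the weights in the toy classifier described above. The 64&#215;64 input size is an assumption for illustration; the architecture list doesn&#8217;t specify one.</p>

```python
# Toy classifier from the text: input pixels -> 256 -> 128 -> 10 categories.
# The 64x64 input resolution is an assumption, not from the original post.
layers = [64 * 64, 256, 128, 10]

# Every pair of adjacent fully connected layers contributes
# (inputs * outputs) weights, plus one bias per output neuron.
n_params = sum(a * b + b for a, b in zip(layers, layers[1:]))
print(n_params)  # over a million parameters, even for this tiny model
```

<p>Even this toy network carries about a million weights; a larger input or wider layers pushes it into the millions, and production models into the billions.</p><p>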
It&#8217;s actually pattern matching at extraordinary scale.</p><h2>The Model File: Just Weights</h2><p>When you download or run a model, what you&#8217;re actually getting is a file containing all those learned weights. Common formats include <code>.pkl</code> (pickle), <code>.safetensors</code>, <code>.pth</code> (PyTorch), or <code>.bin</code> (HuggingFace).</p><p>Inside that file: weights. Billions of decimal numbers. That&#8217;s the entire model. The architecture is usually defined separately (in code), but the weights are the actual learned knowledge.</p><p>This matters more than you might think. That model file <em>is</em> the system. If someone modifies the weights&#8212;even slightly&#8212;the model&#8217;s behavior changes. If a weight is corrupted, the output becomes unreliable. If a weight is deliberately tampered with, the model can be made to misbehave.</p><p>This is why the integrity of model files matters. A model file from an untrustworthy source is an untrustworthy model, full stop.</p><h2>Why Model Files Can Be Dangerous</h2><p>Pickle files (<code>.pkl</code>) deserve special mention because they can execute code when loaded. This is a legacy of how Python pickle works&#8212;it was designed to serialize arbitrary Python objects, including functions. An attacker can craft a malicious pickle file that runs code the moment you load it.</p><p>If you download a model in pickle format from an untrusted source and load it, you&#8217;re potentially running arbitrary code. Safer formats like <code>.safetensors</code> don&#8217;t have this vulnerability; they only contain numbers.</p><h2>Models Are Not Programs</h2><p>This is the mental shift that matters. A traditional program has logic you can read: function calls, conditionals, loops. A model has none of that. 
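</p><p>The contrast is easy to see side by side. Here&#8217;s a minimal sketch; the weights are invented for illustration, and nothing here is a real spam filter.</p>

```python
# A traditional program: logic you can read, line by line.
def is_spam_rules(subject):
    return "free money" in subject.lower()

# A "model" of the same task: no rules, just numbers. These weights are
# made up for illustration; a real model learns them from training data.
weights = {"free": 2.1, "money": 1.7, "meeting": -1.5}
bias = -2.0

def is_spam_model(subject):
    # The decision is arithmetic over weights -- there is no line you can
    # point to and say "this is where it decides."
    score = bias + sum(weights.get(w, 0.0) for w in subject.lower().split())
    return score > 0

print(is_spam_model("free money now"))     # True
print(is_spam_model("team meeting at 3"))  # False
```

<p>Both functions classify, but only the first one explains itself.</p><p>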
You can&#8217;t open a large language model and read &#8220;here&#8217;s where it decides whether to be helpful.&#8221; The behavior emerges from the weights.</p><p>This means:</p><ul><li><p>Models are harder to audit. You can&#8217;t trace a decision path like you can in code.</p></li><li><p>Models are harder to explain. You can&#8217;t point to a line and say &#8220;this caused the output.&#8221;</p></li><li><p>Models fail in unexpected ways. They don&#8217;t fail because of a bug in your if-then logic; they fail because the pattern they learned doesn&#8217;t generalize.</p></li></ul><h2>The Practical Reality</h2><p>In practice, when you use ChatGPT or Claude, you&#8217;re querying, through an API, a model file with billions of weights. The companies behind those models spent months training them on massive amounts of text using specialized hardware. Then they saved the weights to a file.</p><p>When you type a question, that file (the weights) processes your text through its learned patterns and produces an answer. The answer reflects what the model learned during training, for better and worse.</p><p>You&#8217;re not running a program. You&#8217;re querying a statistical function that&#8217;s been tuned to be useful.</p><h2>What&#8217;s Next</h2><p>In the next post, we&#8217;ll look at different types of learning: supervised learning (where you have labels), unsupervised learning (where you don&#8217;t), and reinforcement learning (where the system learns from rewards and penalties).</p><p>For now, the key insight: an AI model is a mathematical function with parameters learned from data. The architecture is the shape. The weights are the knowledge. The model file is the saved state of that knowledge. Understanding this separates mystique from reality.</p>]]></content:encoded></item><item><title><![CDATA[How Did We Get Here? 
The 70-Year History of AI in 5 Minutes]]></title><description><![CDATA[AI didn&#8217;t arrive overnight.]]></description><link>https://www.hackerspot.net/p/how-did-we-get-here-the-70-year-history</link><guid isPermaLink="false">https://www.hackerspot.net/p/how-did-we-get-here-the-70-year-history</guid><dc:creator><![CDATA[Hackerspot Team]]></dc:creator><pubDate>Mon, 20 Apr 2026 22:04:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!o_6u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI didn&#8217;t arrive overnight. The field spent decades in the valley before climbing back out. Understanding where we came from explains why the present moment is actually different.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!o_6u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!o_6u!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png 424w, https://substackcdn.com/image/fetch/$s_!o_6u!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png 848w, https://substackcdn.com/image/fetch/$s_!o_6u!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png 1272w, 
https://substackcdn.com/image/fetch/$s_!o_6u!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!o_6u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png" width="1456" height="913" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:913,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3949382,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hackerspot.net/i/193737129?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!o_6u!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png 424w, https://substackcdn.com/image/fetch/$s_!o_6u!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png 848w, 
https://substackcdn.com/image/fetch/$s_!o_6u!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png 1272w, https://substackcdn.com/image/fetch/$s_!o_6u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fb2dd9b-52f7-445b-9000-df25d48eb41e_1924x1206.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>We&#8217;re Going to Solve Thinking (1950s&#8211;1970s)</h2><p>In 1956, researchers at Dartmouth Summer Research Project 
coined the term &#8220;artificial intelligence.&#8221; They were optimistic&#8212;maybe too optimistic. The idea was that you could program a computer to reason like a human: give it rules and logic, and it would solve problems.</p><p>This &#8220;symbolic AI&#8221; approach ruled for decades. Engineers would manually write rules: if X, then Y. If the weather is rainy, then bring an umbrella. Simple. Clean. Wrong about almost everything complex.</p><p>By the 1970s and 1980s, reality had landed hard. The systems couldn&#8217;t handle the messiness of real data. They broke on edge cases. Funding evaporated. This first &#8220;AI winter&#8221; lasted years&#8212;not because the researchers were incompetent, but because the promise had outrun the technology.</p><p><strong>The lesson:</strong> Hype without compute is just noise.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OUwM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OUwM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OUwM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OUwM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!OUwM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OUwM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg" width="946" height="355" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:355,&quot;width&quot;:946,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:156089,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OUwM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OUwM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OUwM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!OUwM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe969ba7e-c99a-4253-b022-b77f263d2632_946x355.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The Rise and Stall of Statistical Learning (1980s&#8211;2000s)</h2><p>The field pivoted. Instead of hand-coding rules, why not let data teach the system? This was the birth of machine learning, statistical methods capable of learning patterns from examples.</p><p>By the 1990s and 2000s, these methods worked. Banks deployed neural networks to read handwritten checks. 
Spam filters learned what junk email looked like. Later, Kaggle competitions crowned winners with gradient boosting machines (GBMs), statistical models that combined weak predictors into strong ones.</p><p>But progress stalled again. These methods were narrow: a model trained to recognize faces couldn&#8217;t suddenly translate English. Each task needed its own hand-engineered pipeline. The systems were brittle.</p><p>This wasn&#8217;t hype this time&#8212;the math worked. The problem was compute. Good statistical learning needs a lot of data, but good <em>deep</em> learning needs vastly more. CPUs couldn&#8217;t keep up.</p><h2>The Deep Learning Inflection: 2012 and Beyond</h2><p>Then GPUs happened.</p><p>In 2012, a team at the University of Toronto used graphics processors (hardware originally designed for video games) to train a deep neural network on image recognition. The network was called AlexNet. It crushed the competition, cutting error rates nearly in half. The jump was so large that the field collectively paused and said, &#8220;Oh. <em>That&#8217;s</em> what we&#8217;ve been waiting for.&#8221;</p><p>Deep learning worked because it scaled. More layers, more parameters, more compute. And crucially, with enough data and enough compute, you didn&#8217;t need engineers to hand-craft features. The network learned what to look for.</p><p>By the mid-2010s, deep learning was everywhere: computer vision, speech recognition, and machine translation.</p><p>Researchers noticed something: a new architecture called <strong>Transformers</strong> (introduced in a 2017 paper titled <a href="https://en.wikipedia.org/wiki/Attention_Is_All_You_Need">&#8220;Attention Is All You Need&#8221;</a>) worked even better. Unlike previous models that read text one word at a time from left to right, Transformers could process entire sequences simultaneously. 
This &#8220;parallelization&#8221; allowed them to handle massive datasets with incredible speed, forming the technical foundation for everything that came next.</p><h2>The Large Language Model Era: 2020 to Now</h2><p>Starting in 2020, companies began scaling Transformer networks to absurd sizes. OpenAI&#8217;s GPT-3, released in 2020, had 175 billion parameters&#8212;numbers representing learned patterns. For context: the human brain has about 86 billion neurons. Parameters aren&#8217;t neurons, and GPT-3 wasn&#8217;t a brain, but the counts sit in a similar order of magnitude.</p><p>Then ChatGPT launched in late 2022. It was a GPT-3 variant (GPT-3.5) fine-tuned to answer questions in conversational English. It hit 1 million users in five days.</p><p>Since then: Claude (Anthropic), Gemini (Google), and countless others. The pattern is consistent: scale up, add more compute, train on more text, get smarter.</p><h2>Why Now Is Actually Different</h2><p>Here&#8217;s what matters: compute is the through-line. AI winters happened when promises exceeded compute capacity. Algorithms didn&#8217;t improve miraculously in 2012; GPUs made existing algorithms finally viable.</p><p>In 2019, researcher Richard Sutton summarized this shift in an essay titled <a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">&#8220;The Bitter Lesson.&#8221;</a> His point was a blow to human ego: general methods that leverage massive computing always beat &#8220;clever&#8221; approaches where humans try to bake their own knowledge into the system. The field spent 70 years trying to be smart; it turns out that being &#8220;big&#8221; was the more effective strategy.</p><p>This is why 2020&#8211;2025 feels different: we have the compute. We understand the architecture. We have enough data. The constraint that killed AI twice before, &#8220;we don&#8217;t have enough resources to make this work,&#8221; has lifted.</p><h2>The Cost of Progress: New Vulnerabilities</h2><p>Each wave of AI introduced new security surfaces. 
Symbolic AI could fail in obvious ways. Statistical models were opaque but narrowly scoped. Deep learning is opaque <em>and</em> scaled to billions of parameters.</p><p>A model file containing billions of learned weights is now the system. Because these systems are pattern-matchers rather than reasoners, they lack an internal &#8220;truth check.&#8221; This has led to vulnerabilities such as&nbsp;<strong>Prompt Injection</strong>, in which a model is tricked into ignoring its safety guidelines. As we head into 2026, the threat has evolved into <strong>Indirect Prompt Injection</strong>, in which an AI can be subverted simply by reading a malicious website or document, turning the entire internet into a potential attack surface.</p><p>The attack surfaces keep evolving. So does the defense.</p><h2>The Actual Arc</h2><p>The 70-year history of AI is not the story of a lone genius suddenly striking gold. It&#8217;s a cycle: promise, failure, reset, waiting for hardware, breakthrough, scale, repeat. Three phases: symbolic logic failed. Statistical learning stalled. Deep learning accelerated.</p><p>We&#8217;re in the deep learning phase now, and the resources have finally aligned. But the story isn&#8217;t over. As we move through 2026, the focus is shifting from raw scaling to <strong>reasoning efficiency</strong>, creating models that don&#8217;t just know everything, but can &#8220;think&#8221; through a problem before they speak. 
The next chapter isn&#8217;t just about more data; it&#8217;s about what we do with the intelligence we&#8217;ve finally managed to build.</p>]]></content:encoded></item><item><title><![CDATA[What Is AI, Machine Learning, and Deep Learning?]]></title><description><![CDATA[Three terms the internet loves to mix up: here&#8217;s what they actually mean, no jargon required.]]></description><link>https://www.hackerspot.net/p/ai-machine-learning-and-deep-learning</link><guid isPermaLink="false">https://www.hackerspot.net/p/ai-machine-learning-and-deep-learning</guid><dc:creator><![CDATA[Chady]]></dc:creator><pubDate>Mon, 13 Apr 2026 21:54:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pIOH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You&#8217;ve heard all three terms. You&#8217;ve probably used them interchangeably. But AI, machine learning, and deep learning are not the same thing, and understanding the difference is the first step to understanding why AI systems are <strong>inherently fragile</strong>, how their &#8220;learning&#8221; can be turned against them, and why they often behave in ways that <strong>defy human logic</strong>.</p><blockquote><p>Please note that this post is the first of our <strong>AI Security series</strong>, where we bridge the gap between high-level hype and technical reality. Before we dive into the specialized vulnerabilities of these systems, we must first talk about the basics. 
</p><p>By establishing a clear, jargon-free understanding of how these technologies differ and how they learn, we lay the groundwork for the more complex security and architectural topics to follow in this series.</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pIOH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pIOH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png 424w, https://substackcdn.com/image/fetch/$s_!pIOH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png 848w, https://substackcdn.com/image/fetch/$s_!pIOH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png 1272w, https://substackcdn.com/image/fetch/$s_!pIOH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pIOH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png" width="730" height="479" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:479,&quot;width&quot;:730,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:637240,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hackerspot.net/i/192378690?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6653efa-63ac-47e7-9693-8f54521454ea_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!pIOH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png 424w, https://substackcdn.com/image/fetch/$s_!pIOH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png 848w, https://substackcdn.com/image/fetch/$s_!pIOH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png 1272w, https://substackcdn.com/image/fetch/$s_!pIOH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b0a071e-ae82-4eaf-941f-993d757436d4_730x479.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>AI Is the Big Tent</h2><p><strong>Artificial intelligence</strong> (AI) is the broadest term. It refers to any system that exhibits intelligent behavior &#8212; reasoning, problem-solving, learning, or decision-making &#8212; that we&#8217;d normally associate with humans.</p><p>That definition is deliberately wide. A rule-based system that plays chess using handwritten rules counts as AI. So does a neural network that generates images from text. They&#8217;re very different technologies, but both fall under the AI umbrella.</p><p>The key idea is that AI is the goal (machine intelligence), not a specific technique.</p><h2>Machine Learning Is How Most Modern AI Actually Works</h2><p><strong>Machine learning</strong> (ML) is a subset of AI. 
Instead of writing explicit rules, you show the system thousands (or millions) of examples, and it figures out the patterns on its own.</p><p>Think of it this way. You could write rules to identify spam email: &#8220;if the subject contains &#8216;FREE MONEY&#8217;, mark as spam.&#8221; But attackers adapt. Rules break. Machine learning takes a different approach: show the system 10 million emails labeled &#8220;spam&#8221; or &#8220;not spam&#8221;, and it learns to recognize the patterns itself &#8212; including patterns you never thought to write a rule for.</p><p>The core principle: ML systems <strong>generalize</strong>. They learn from past examples and apply that learning to new, unseen data. That&#8217;s what makes them powerful. It&#8217;s also what makes them fragile in ways traditional software isn&#8217;t &#8212; a topic we&#8217;ll come back to throughout this series.</p><h2>Deep Learning Is ML With Many Layers</h2><p><strong>Deep learning</strong> (DL) is a subset of machine learning. It uses artificial neural networks, loosely inspired by how neurons connect in the brain, with many layers stacked on top of each other. That&#8217;s the &#8220;deep&#8221; part.</p><p>Each layer learns to recognize increasingly abstract features. In an image recognition system:</p><ul><li><p>Layer 1 might detect edges</p></li><li><p>Layer 5 might detect shapes</p></li><li><p>Layer 20 might detect &#8220;cat ears.&#8221;</p></li></ul><p>Deep learning is why we can now build systems that recognize faces, transcribe speech, translate languages, and generate text with remarkable fluency. 
It powers virtually every AI product you interact with today &#8212; from spam filters to ChatGPT.</p><p>The hierarchy, in plain terms:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NOlG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NOlG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png 424w, https://substackcdn.com/image/fetch/$s_!NOlG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png 848w, https://substackcdn.com/image/fetch/$s_!NOlG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png 1272w, https://substackcdn.com/image/fetch/$s_!NOlG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NOlG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png" width="1456" height="258" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:258,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:70907,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.hackerspot.net/i/192378690?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NOlG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png 424w, https://substackcdn.com/image/fetch/$s_!NOlG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png 848w, https://substackcdn.com/image/fetch/$s_!NOlG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png 1272w, https://substackcdn.com/image/fetch/$s_!NOlG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F293bbadd-266b-4ada-8f10-5af74021dd39_1808x320.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2>Why Compute Beat Cleverness</h2><p>Here&#8217;s one of the most important, and counterintuitive, lessons from 70 years of AI research.</p><p>Researchers spent decades trying to build 
cleverer algorithms. Handcrafting rules, encoding human knowledge, designing elegant mathematical models. And they were consistently outperformed by one simple strategy: <strong>throw more data and more computing power at a simpler approach</strong>.</p><p>Richard Sutton, a pioneer in AI research, called this &#8220;the bitter lesson&#8221; in 2019: general methods that leverage computation are ultimately the most effective, by a large margin.</p><p>What this means in practice: modern AI progress is driven less by brilliant new algorithms and more by scale &#8212; bigger datasets, more powerful GPUs, more parameters. GPT-3, the predecessor of the models behind early ChatGPT, has 175 billion parameters. Its successor models are larger still.</p><p>This has a direct security implication. Scale means complexity, and complexity means more attack surface. A system with 175 billion parameters is not something any human can fully inspect or understand. That opacity is a security property &#8212; and not a good one.</p><h2>What Is AI Actually Good At?</h2><p>A quick litmus test helps here. AI tends to work well when:</p><ul><li><p>The problem isn&#8217;t already solved by simpler means</p></li><li><p>You have enough good-quality training data</p></li><li><p>Some margin of error is acceptable</p></li><li><p>The patterns you&#8217;re learning from are relatively stable over time</p></li></ul><p>It tends to fail &#8212; sometimes catastrophically &#8212; when:</p><ul><li><p>The situation is genuinely novel (unlike anything in the training data)</p></li><li><p>100% accuracy is required</p></li><li><p>The underlying patterns change faster than the model can be retrained</p></li><li><p>The training data was biased, poisoned, or just plain wrong</p></li></ul><p>That last bullet is where security gets interesting. The training data is a trust boundary. 
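</p><p>A toy illustration of the risk, using entirely invented data: a bare-bones keyword spam filter learns which words predict &#8220;spam&#8221;, and flipping the labels on a handful of poisoned examples is enough to change what it blocks.</p>

```python
from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. ham examples."""
    spam, ham = Counter(), Counter()
    for text, label in examples:
        (spam if label == "spam" else ham).update(text.lower().split())
    return spam, ham

def is_spam(text, spam, ham):
    """Classify by comparing total spam vs. ham evidence for the words."""
    words = text.lower().split()
    return sum(spam[w] for w in words) > sum(ham[w] for w in words)

# Invented training data: "free money" strongly predicts spam.
data = [("free money now", "spam"), ("claim free money", "spam"),
        ("free money offer", "spam"), ("meeting notes attached", "ham"),
        ("lunch on friday", "ham"), ("quarterly report draft", "ham")]

spam, ham = train(data)
print(is_spam("free money inside", spam, ham))   # the filter catches it

# Poisoning: an attacker flips the labels of the "free money" examples.
poisoned = [(t, "ham" if "free money" in t else l) for t, l in data]
spam_p, ham_p = train(poisoned)
print(is_spam("free money inside", spam_p, ham_p))  # now waved through
```

<p>Three flipped labels, and the filter silently stops blocking exactly the phrase the attacker cares about.</p><p>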
If an attacker can influence what a model learns from, they can influence what the model does &#8212; permanently, and invisibly. More on that in Series 4.</p><h2>Conclusion</h2><p>AI, ML, and deep learning are not interchangeable buzzwords. They&#8217;re a nested hierarchy of increasingly specific techniques, all built on the same core idea: learn patterns from data rather than encode rules by hand.</p><p>What makes this matter for security is exactly what makes it powerful: these systems learn behaviors that nobody explicitly programmed. That means the attack surface includes the data, the training process, the model file, and the inference pipeline &#8212; not just the application code sitting on top.</p><p>The rest of this series builds the foundation you need to understand all of that. Next up: how we got from &#8220;AI&#8221; being coined as a term in 1956 to ChatGPT in 2022 &#8212; and what the detours tell us about where the real risks live.</p>]]></content:encoded></item><item><title><![CDATA[Scaling Your Engineering Impact with Agents]]></title><description><![CDATA[A Framework for Engineering with AI Agents]]></description><link>https://www.hackerspot.net/p/mastering-coding-agents</link><guid isPermaLink="false">https://www.hackerspot.net/p/mastering-coding-agents</guid><dc:creator><![CDATA[Chady]]></dc:creator><pubDate>Fri, 10 Apr 2026 16:30:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!41-b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are moving past the era of the chatbot. Today, <strong>coding agents</strong> are beginning to handle the heavy lifting of implementation, but they are only as good as the engineer directing them. Much like a musical instrument, an agent can produce 'slop' or a masterpiece; the difference lies in your technique. 
I&#8217;ve put together a few simple shifts to help you move from writing every line of code to orchestrating the bigger picture.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!41-b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!41-b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg 424w, https://substackcdn.com/image/fetch/$s_!41-b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg 848w, https://substackcdn.com/image/fetch/$s_!41-b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!41-b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!41-b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg" width="876" height="526" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:526,&quot;width&quot;:876,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:163901,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!41-b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg 424w, https://substackcdn.com/image/fetch/$s_!41-b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg 848w, https://substackcdn.com/image/fetch/$s_!41-b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!41-b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56195875-540a-4b7b-90b5-4ce845776642_876x526.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Access to Verification</h2><p>The single most important factor in an agent&#8217;s success is whether it has access to <strong>verification</strong>. Without it, the agent is simply &#8220;guessing&#8221; based on patterns.</p><ul><li><p><strong>Provide Tool Access</strong>: Agents need to do what humans do: run the application, view logs, and perform tests.</p></li><li><p><strong>Tighten the Feedback Loop</strong>: When an agent can see the output of its work&#8212;such as reading logs from a <strong>CI</strong> server&#8212;the quality of its code improves substantially.</p></li><li><p><strong>Test the Tests</strong>: Agents often write code and tests at the same time, which can lead to tests that pass &#8220;by construction&#8221;. Always ask the agent to introduce a <strong>regression</strong> to ensure the test actually catches the error.</p></li></ul><h2>Work in &#8220;Plan Mode&#8221;</h2><p>Don&#8217;t ask an agent to do everything at once. 
You will get better results by separating the &#8220;thinking&#8221; from the &#8220;doing&#8221;.</p><ul><li><p><strong>The Power of Plan Mode</strong>: In this mode, a <strong>system prompt</strong> strictly forbids the agent from writing code. This allows the agent to use all its resources to understand the problem and design an <strong>architecture</strong>.</p></li><li><p><strong>Human-Led Design</strong>: You must still do the work to break down large, messy problems into small, manageable tasks. If the scope is too big, agents may confidently produce &#8220;slop&#8221;, thousands of lines of code containing hidden bugs.</p></li></ul><blockquote><p><strong>System Prompt</strong>: The background instructions that tell the AI how to behave (e.g., &#8220;do not write any code&#8221;).</p></blockquote><h2>Manage the &#8220;Context Window&#8221;</h2><p>An AI&#8217;s &#8220;memory&#8221; is known as its <strong>context window</strong>. If this window gets too full, the AI&#8217;s performance &#8220;drops off a cliff&#8221;.</p><ul><li><p><strong>The 50% Rule</strong>: Try to keep your conversation history below <strong>50%</strong> of the context window to maintain high accuracy.</p></li><li><p><strong>Fresh Starts</strong>: If an agent starts going in circles or <strong>hallucinating</strong>, the context is likely &#8220;corrupted&#8221;. It is often better to close the session and start a new one.</p></li><li><p><strong>Track State in Markdown</strong>: Keep a <code>.md</code> file in your codebase to track project progress. 
This allows a new agent session to &#8220;read the file&#8221; and catch up instantly without wasting memory.</p></li></ul><blockquote><p><strong>Context Window</strong>: The maximum amount of information (text and code) an AI can &#8220;remember&#8221; at one time.</p><p><strong>Hallucination</strong>: When an AI confidently provides information that is false or incorrect.</p></blockquote><h2>Additional Tips for Better Results</h2><ul><li><p><strong>Pick the Right Language</strong>: Agents are currently most effective with <strong>TypeScript</strong> and <strong>Go</strong> because their libraries are &#8220;source available&#8221; (the AI can read the actual code). They struggle more with the <strong>JVM</strong> (Java/Kotlin) because those libraries are often bytecode that the agent cannot read.</p></li><li><p><strong>Use High-Quality Models</strong>: Cheaper models often waste time and <strong>tokens</strong> by spiraling or deleting code they don&#8217;t understand. Using a top-tier model often solves the problem on the first try.</p></li><li><p><strong>Encode Skills</strong>: If you find yourself giving the same instructions repeatedly, turn them into a <strong>Skill</strong>. This is like giving the agent a permanent &#8220;how-to&#8221; guide for a specific task.</p></li></ul><blockquote><p><strong>Tokens</strong>: The basic units (words or parts of words) that AI models use to process and &#8220;read&#8221; text.</p><p><strong>Skill</strong>: A saved set of instructions that an agent can automatically use whenever it needs to perform a specific job.</p></blockquote><h2>Conclusion: From Code Writer to Orchestrator</h2><p>The arrival of AI doesn&#8217;t minimize the need for great engineers; it changes what they focus on. In the past, value was measured by the &#8220;depth&#8221; of knowledge in a narrow niche. 
Today, value is shifting toward <strong>breadth</strong>.</p><p>Because the agent can handle the &#8220;depth&#8221; of implementation, the human engineer must provide the &#8220;breadth&#8221; of general knowledge. Understanding how networking, security, and architecture connect allows you to act as an <strong>orchestrator</strong>, delegating tasks while maintaining the high-level judgment that keeps the system robust.</p><p>Don&#8217;t be discouraged if your first hour with a coding agent feels clunky. It takes practice to develop the skill to use them well. Keep experimenting, keep breaking down your problems, and always give your agent a way to verify its work.</p>]]></content:encoded></item><item><title><![CDATA[Is Your Security Team Scalable? Why LLMs are the Only Answer]]></title><description><![CDATA[The Caffeine Pill for Security Teams]]></description><link>https://www.hackerspot.net/p/is-your-security-team-scalable-why</link><guid isPermaLink="false">https://www.hackerspot.net/p/is-your-security-team-scalable-why</guid><dc:creator><![CDATA[Chady]]></dc:creator><pubDate>Fri, 27 Mar 2026 16:31:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VVvV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Security teams have too much work and not enough time. There is a huge gap between the amount of new code being written and the number of people available to check it. I want to share how LLMs can help. 
We can use AI to act on your team's behalf, helping you work faster and focus on real threats.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VVvV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VVvV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VVvV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VVvV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VVvV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VVvV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg" width="924" height="411" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:411,&quot;width&quot;:924,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:115689,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VVvV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VVvV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VVvV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VVvV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62eebb2f-a0af-49a7-8982-021372e8a7e0_924x411.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Understanding the AI Engine</h3><p>Before building AI tools, it is important to understand the technical rules that govern how these models process data. 
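</p><p>The most important of these rules: a chat model keeps no state between calls, so any &#8220;memory&#8221; is just the prior turns resent inside the new request. A minimal sketch of that pattern, where <code>call_model</code> is a hypothetical stand-in for whatever completion API you actually use:</p>

```python
def call_model(messages):
    """Hypothetical stand-in for a real chat-completion API call.
    The only 'memory' the model ever sees is this messages list."""
    return f"(reply based on {len(messages)} messages of context)"

# The system prompt plus every prior turn lives on OUR side, not the model's.
history = [{"role": "system", "content": "You are an expert security engineer."}]

def ask(question):
    # Append the new question, send the WHOLE history, store the reply.
    history.append({"role": "user", "content": question})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Is this nginx config safe?")
ask("What about the second server block?")  # works only because turn 1 is resent
```

<p>Drop the history and the second question becomes unanswerable: the model has no record that a config was ever discussed.</p><p>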
Knowing that models are stateless helps you design better systems that rely on context rather than memory.</p><ul><li><p><strong>Tokens and Context</strong>: AI reads words in small pieces called &#8220;tokens,&#8221; which represent about 3/4 of a word.</p></li><li><p><strong>Stateless Nature</strong>: Most modern AI models are stateless, meaning they do not &#8220;learn&#8221; or change their internal weights while you are talking to them.</p></li><li><p><strong>Memory</strong>: Because the AI is stateless, it doesn&#8217;t remember your last question; to give it &#8220;memory,&#8221; you must include the previous parts of the conversation in your new request.</p></li><li><p><strong>Data Quality</strong>: It is better to give the AI high-quality information (context) in your prompt&#8212;sometimes up to 128k tokens&#8212;than to try and &#8220;train&#8221; or fine-tune the model itself.</p></li></ul><h3>Checking Projects Faster (SDLC)</h3><p>The Software Development Life Cycle (SDLC) is the process of building software, and in a fast company, it can be very unpredictable. Using AI to automate the initial review of these projects allows security teams to prioritize the most dangerous changes.</p><ul><li><p><strong>Risk Scoring</strong>: You can use an AI bot to read design documents and give a &#8220;risk score&#8221; and &#8220;confidence level&#8221; to show which projects need a human expert first.</p></li><li><p><strong>Watching Changes</strong>: If a developer changes a plan&#8212;for example, making a private tool public&#8212;the AI can see this change and raise the risk score immediately.</p></li><li><p><strong>Passive Monitoring</strong>: AI can watch chat channels; if it sees a developer talking about a security mistake (like skipping a password check), it can alert the security team.</p></li></ul><h3>Managing Access (IAM)</h3><p>Giving people the right permissions to use tools is often slow and creates friction for engineers. 
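</p><p>A common building block here is embedding similarity: represent the request and each access group&#8217;s description as vectors, then rank groups by cosine similarity. A toy sketch with hand-made three-dimensional vectors standing in for real embeddings (the group names and numbers are invented for illustration):</p>

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Invented embeddings; a real system would get these from an embedding model.
groups = {
    "prod-db-readonly":  [0.9, 0.1, 0.0],  # databases, read access
    "ci-pipeline-admin": [0.1, 0.9, 0.1],  # build infrastructure
    "billing-dashboard": [0.0, 0.1, 0.9],  # finance tooling
}

request = [0.8, 0.2, 0.1]  # "I need to read the production database"

best = max(groups, key=lambda g: cosine(request, groups[g]))
print(best)  # -> prod-db-readonly
```

<p>The same score can gate auto-approval: above a threshold the request looks routine for the role; below it, a human reviews.</p><p>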
AI can simplify this by matching a user&#8217;s natural language request to the technical groups required to do their job.</p><ul><li><p><strong>Simple Language</strong>: Instead of searching for a specific technical group name, a user can describe what they need, and the AI finds the right access group for them.</p></li><li><p><strong>Smart Approvals</strong>: AI can look at how a person usually works using &#8220;cosine similarity&#8221;; if their request looks normal for their role, it can be approved faster.</p></li><li><p><strong>Audit Trails</strong>: All access granted through these AI tools is logged to create a clear history for security audits.</p></li></ul><h3>Sorting Bug Reports</h3><p>If you have a &#8220;bug bounty&#8221; program, you might get thousands of reports every day, which is too much for humans to handle. AI can act as a first filter to remove noise and send real vulnerabilities to the right people.</p><ul><li><p><strong>Filtering the Noise</strong>: AI can quickly read reports and close the ones that are just complaints or &#8220;out of scope,&#8221; like missing email headers.</p></li><li><p><strong>Directing Traffic</strong>: The AI can send payment issues to the billing team and general model errors to the safety team, so security engineers only see real technical bugs.</p></li><li><p><strong>Improving Quality</strong>: AI can even ask the reporter for more information, like a missing URL, before a human ever has to look at the ticket.</p></li></ul><h3>Finding Attackers in Logs</h3><p>Reviewing computer logs is a &#8220;needle in a haystack&#8221; problem where humans often get tired and miss important data. 
LLMs are consistently good at finding these small signs of an attack within massive amounts of noisy data.</p><ul><li><p><strong>Log Summarization</strong>: AI is great at finding one bad command hidden in thousands of lines of logs, such as a malicious one-liner used to start a reverse shell.</p></li><li><p><strong>Interactive Remediation</strong>: If a user does something risky by accident, such as sharing a file publicly, a bot can message them to ask if it was intentional.</p></li><li><p><strong>Summarization for Defense</strong>: The AI summarizes these user conversations and sends them back to the incident response team for a final check.</p></li></ul><h3>Tips for Using AI</h3><p>To get the best results from AI in a security context, you must move past simple trial-and-error and use data-driven methods. Following these expert tips will ensure your AI tools are helpful and accurate.</p><ul><li><p><strong>Treat It Like an Expert</strong>: Always tell the AI: &#8220;You are an expert security engineer.&#8221; It will give you much better answers than if you treat it like an average worker.</p></li><li><p><strong>Use Data, Not &#8220;Vibes&#8221;</strong>: Do not just guess whether the AI is working; use an &#8220;Evaluation Framework&#8221; with known-good answers to check the AI and improve your prompts.</p></li><li><p><strong>Self-Correction</strong>: You can even use a second, smaller AI model to check the answers of the first model to ensure they are correct.</p></li><li><p><strong>Keep Humans Involved</strong>: AI is not perfect and can &#8220;hallucinate&#8221; (make things up). A human should always be &#8220;in the loop&#8221; to review disputes or make high-stakes decisions.</p></li></ul><p>Using these tools is easier than you think. 
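</p><p>The &#8220;Evaluation Framework&#8221; tip above can start as something very small: a list of prompts with known-good answers, scored automatically every time you change a prompt. A sketch with invented reports, where <code>triage</code> is a hypothetical stand-in for the LLM call being evaluated:</p>

```python
# Golden set: bug-report snippets with the labels a human expert assigned.
GOLDEN = [
    ("Missing SPF record on marketing domain", "out_of_scope"),
    ("SQL injection in /search endpoint",      "security"),
    ("Charged twice for the same invoice",     "billing"),
]

def triage(report):
    """Hypothetical stand-in for the LLM triage call under evaluation."""
    text = report.lower()
    if "injection" in text:
        return "security"
    if "charged" in text or "invoice" in text:
        return "billing"
    return "out_of_scope"

def evaluate(classifier, golden):
    """Score the classifier against known-good answers, not vibes."""
    hits = sum(classifier(report) == label for report, label in golden)
    return hits / len(golden)

print(f"accuracy: {evaluate(triage, GOLDEN):.0%}")  # -> accuracy: 100%
```

<p>When accuracy drops after a prompt change, you find out before your users do.</p><p>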
By using AI for the &#8220;boring&#8221; parts of security, you allow your human experts to focus on the most important work.</p>]]></content:encoded></item><item><title><![CDATA[Moving Software Security from “Human Speed” to AI]]></title><description><![CDATA[How AI agents and autonomous reasoning are ending the era of manual patching]]></description><link>https://www.hackerspot.net/p/the-future-of-software-security-moving</link><guid isPermaLink="false">https://www.hackerspot.net/p/the-future-of-software-security-moving</guid><dc:creator><![CDATA[Chady]]></dc:creator><pubDate>Fri, 13 Mar 2026 16:30:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!T5BW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The AI hype is going full speed, and we are currently losing the race against hackers. While attackers use fast, automated tools to find flaws, we still rely on people to fix them by hand. This creates a dangerous gap. We can no longer manage security manually; we need AI agents that can think and act instantly. It is time to move from a slow, human process to a fast, machine-driven defense.</p><p>The reality of modern software is that it is growing too fast for humans to manage. We have millions of lines of code, constant updates, and new threats appearing every hour. Traditional security, where a human finds a bug, writes a fix, and tests it manually, is simply too slow. 
We are operating at &#8220;human speed&#8221; in a world that demands &#8220;machine speed.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T5BW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!T5BW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg 424w, https://substackcdn.com/image/fetch/$s_!T5BW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg 848w, https://substackcdn.com/image/fetch/$s_!T5BW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!T5BW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T5BW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg" width="836" height="459" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:459,&quot;width&quot;:836,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:135808,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!T5BW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg 424w, https://substackcdn.com/image/fetch/$s_!T5BW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg 848w, https://substackcdn.com/image/fetch/$s_!T5BW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!T5BW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e79913e-b619-4e33-820a-f508530bef9e_836x459.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Today, I want to share a vision for an approach called <strong>Autonomous Security.</strong> This is the idea that we can use AI agents to automatically find and fix vulnerabilities, with higher quality than even the best human experts.</p><h2>Finding Vulnerabilities with &#8220;Reasoning&#8221;</h2><p>The biggest problem with traditional security scanners is that they aren&#8217;t &#8220;smart.&#8221; They look for patterns, but they don&#8217;t understand how code actually works. This leads to thousands of &#8220;false alarms&#8221; that waste our engineers&#8217; time.</p><p>The idea we are moving toward involves an <strong>Agentic Reasoning Loop</strong>. 
Instead of a simple scan, we use an AI agent that acts like a researcher:</p><ul><li><p><strong>It makes a hypothesis:</strong> &#8220;I think there is a flaw in how this data is processed.&#8221;</p></li><li><p><strong>It uses real tools:</strong> The AI uses debuggers and code browsers to test its theory.</p></li><li><p><strong>It proves the flaw:</strong> The agent doesn&#8217;t report a bug unless it can actually cause the program to fail (a &#8220;crash verification&#8221;).</p></li></ul><p>By requiring proof, we achieve <strong>zero false positives</strong>. We focus only on real, verified threats.</p><h2>The &#8220;Self-Healing&#8221; Codebase</h2><p>Finding a bug is only half the battle. The hardest part of my job is fixing a vulnerability without breaking the rest of the product. This is why many security patches take months to release.</p><p>We are now exploring a <strong>Rigorous Validation Pipeline</strong> for autonomous fixing. When the AI finds a flaw, it creates a &#8220;patch&#8221; and puts it through a gauntlet of tests:</p><ul><li><p><strong>Dynamic Analysis:</strong> Does the fix actually close the security hole?</p></li><li><p><strong>Static Analysis:</strong> Does the new code follow our safety standards?</p></li><li><p><strong>Differential Testing:</strong> Does the software still behave exactly the same for the end user?</p></li></ul><p>By automating this validation, we can move from a <strong>months-long</strong> patching cycle to a <strong>minutes-long</strong> cycle. The software essentially begins to &#8220;heal&#8221; itself.</p><h2>Shifting from Reactive to Proactive</h2><p>Most security work today is <strong>reactive</strong>&#8212;we fix things after they are broken. 
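</p><p>To make the validation gauntlet described above concrete, here is a minimal sketch of it as an all-or-nothing gate. The three check functions are hypothetical placeholders standing in for real dynamic, static, and differential testing tools, not any real product&#8217;s API:</p>

```python
# Minimal sketch of an automated patch-validation gate.
# The three check functions are hypothetical placeholders that stand in
# for real dynamic, static, and differential testing tools.

def dynamic_check(patch):
    """Re-run the proof-of-vulnerability input: the exploit must no longer crash."""
    return patch["exploit_crashes_after_fix"] is False

def static_check(patch):
    """Lint the new code against safety standards (placeholder)."""
    return patch["style_violations"] == 0

def differential_check(patch):
    """Compare behavior before and after on normal inputs: it must be identical."""
    return patch["behavior_diffs"] == 0

def validate_patch(patch):
    # A patch ships only if it passes every stage of the gauntlet.
    checks = [dynamic_check, static_check, differential_check]
    return all(check(patch) for check in checks)

candidate = {
    "exploit_crashes_after_fix": False,  # the security hole is closed
    "style_violations": 0,               # meets coding standards
    "behavior_diffs": 0,                 # end users see no change
}
print(validate_patch(candidate))  # True: safe to merge automatically
```

<p>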
I believe the future of this field is <strong>proactive hardening.</strong></p><p>This vision has three parts:</p><ol><li><p><strong>Hardening:</strong> Automatically adding defensive layers to code as it&#8217;s being written.</p></li><li><p><strong>Auto-Mending:</strong> Using AI to clean up old, &#8220;legacy&#8221; codebases that haven&#8217;t been touched in years.</p></li><li><p><strong>Secure Generation:</strong> Training our AI models to write &#8220;secure-by-default&#8221; code, so the bugs never exist in the first place.</p></li></ol><h2>Why This Idea Changes Everything</h2><p>The goal isn&#8217;t just to make developers faster; it&#8217;s to eliminate the &#8220;security debt&#8221; that every company carries. By combining the reasoning power of AI with strict, automated testing, we can create a digital world where vulnerabilities are the exception, not the rule.</p><p>We are entering an era where our defense is finally as fast as the code we create.</p>]]></content:encoded></item><item><title><![CDATA[Let's Talk About the Security of AI Agents]]></title><description><![CDATA[AI agents introduce persistence, execution power, tool control, multi-agent orchestration, and unpredictable planning loops]]></description><link>https://www.hackerspot.net/p/lets-talk-about-the-security-of-ai</link><guid isPermaLink="false">https://www.hackerspot.net/p/lets-talk-about-the-security-of-ai</guid><dc:creator><![CDATA[Chady]]></dc:creator><pubDate>Sat, 13 Dec 2025 05:14:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q_X6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>AI is moving into a phase where it no longer just answers &#8212; it <em>acts</em>. 
LLM-driven AI agents are beginning to operate like autonomous digital workers, taking multi-step actions, interacting with live systems, and modifying environments without continuous human supervision.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!q_X6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!q_X6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png 424w, https://substackcdn.com/image/fetch/$s_!q_X6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png 848w, https://substackcdn.com/image/fetch/$s_!q_X6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png 1272w, https://substackcdn.com/image/fetch/$s_!q_X6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!q_X6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png" width="955" height="355" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:355,&quot;width&quot;:955,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:644251,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hackerspot.net/i/159259507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!q_X6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png 424w, https://substackcdn.com/image/fetch/$s_!q_X6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png 848w, https://substackcdn.com/image/fetch/$s_!q_X6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png 1272w, https://substackcdn.com/image/fetch/$s_!q_X6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40ed5506-add4-4a07-8f94-64445e1bcd1a_955x355.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div>
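<p>The risk is easiest to see in code. Here is a minimal sketch of the tool-dispatch step at the heart of such an agent loop, with an allow-list as the simplest form of tool control. The tool registry and tool names are illustrative placeholders, not any real agent framework&#8217;s API:</p>

```python
# Minimal sketch of the tool-dispatch step inside an LLM agent loop.
# The tool registry, tool names, and guard are illustrative placeholders,
# not any real agent framework's API.

TOOLS = {
    "read_file":   lambda path: f"<contents of {path}>",
    "delete_file": lambda path: f"deleted {path}",  # destructive!
}

# Without a guard, whatever action the model's planning loop emits gets
# executed. An allow-list is the simplest form of tool control:
ALLOWED = {"read_file"}

def dispatch(action):
    """Execute one model-chosen action, refusing anything off the allow-list."""
    name, arg = action["tool"], action["arg"]
    if name not in ALLOWED:
        return f"refused: {name} is not an allowed tool"
    return TOOLS[name](arg)

# The planning loop might emit either of these actions:
print(dispatch({"tool": "read_file", "arg": "notes.txt"}))    # executes
print(dispatch({"tool": "delete_file", "arg": "notes.txt"}))  # refused
```

<p>Anything the planning loop requests that is not explicitly allowed is refused rather than executed, which is what makes unpredictable multi-step plans tolerable in a live system.</p>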
      <p>
          <a href="https://www.hackerspot.net/p/lets-talk-about-the-security-of-ai">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>