<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[TestCraftLab Notes]]></title><description><![CDATA[TestCraftLab Notes]]></description><link>https://testcraftlab.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 11:37:27 GMT</lastBuildDate><atom:link href="https://testcraftlab.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Why Hardware Thinking Builds Better Software]]></title><description><![CDATA[In my Arduino and custom PCB design hobby projects, I have made more than enough hardware mistakes to keep myself humble. Misrouted traces, incorrect footprints, forgotten pull-up resistors, swapped pins, overly optimistic assumptions about power dra...]]></description><link>https://testcraftlab.com/why-hardware-thinking-builds-better-software</link><guid isPermaLink="true">https://testcraftlab.com/why-hardware-thinking-builds-better-software</guid><category><![CDATA[Software Testing]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[QA]]></category><category><![CDATA[Testing]]></category><dc:creator><![CDATA[Konstantin Shenderov]]></dc:creator><pubDate>Fri, 21 Nov 2025 05:13:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763685089527/b0c756a8-012c-4ecd-aca9-4eb7cf39be14.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my Arduino and custom PCB design hobby projects, I have made more than enough hardware mistakes to keep myself humble. Misrouted traces, incorrect footprints, forgotten pull-up resistors, swapped pins, overly optimistic assumptions about power draw, or a component I placed too close to a heat source. 
When something failed, I could usually fix it with a few hours of probing, rework, and a soldering iron. In the worst case, I ordered a new batch of PCBs for a few dollars and moved on. Annoying but manageable. </p>
<p>But these small failures started to teach me something deeper. Many hardware issues only appear under a very narrow set of conditions. A temperature shift. A specific combination of loads. A long wire run acting like an antenna. A timing race that only happens when the system begins to age. It is easy to miss these cases in hobby projects, and even easier to miss them when the stakes are higher. </p>
<p>This is when I fully realized how different hardware bugs are from software bugs. In software, a bug can often be fixed with a patch or a hotfix rollout. Even if the problem is severe, distribution is still digital. In hardware, a bug stays locked inside every unit you have already manufactured. Fixing it means redesigning, reordering, recalling, or scrapping products. Sometimes the cost is not even financial. It is reputation, customer trust, supply chain delays, and months of lost market opportunity. A single overlooked detail can quietly scale into millions of dollars of damage.</p>
<p></p><figure>
  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763699069333/f2945327-2d87-4a4d-a629-d04aa7c826d1.jpeg" alt="Figure 1: When you forget that this specific GPIO does not have internal pull-ups." />
  <figcaption>
    Figure 1: When you forget that this specific GPIO does not have internal pull-ups.
  </figcaption>
</figure><p></p>
<h3 id="heading-hardware-changed-how-i-think-about-testing">Hardware changed how I think about testing</h3>
<p>Working with hardware shaped my thinking in ways I did not expect. When a mistake has a real and measurable cost, you naturally slow down. You become more intentional. You double-check assumptions you once thought were obvious. You learn to think about edge cases not as “corner scenarios” but as real scenarios that will eventually happen when your product reaches scale.</p>
<p>This mindset transferred into my work as a Senior SDET. I noticed that hardware teaches you to:</p>
<ul>
<li><p><strong>Plan for failure before success.</strong> Hardware engineers expect things to go wrong and design for that reality. In software, we often test the happy path first and backfill everything else later. Hardware flipped that thinking for me.</p>
</li>
<li><p><strong>Treat constraints as design inputs.</strong> Voltage limits, timing margins, and thermal tolerances force you to think within boundaries. In software, constraints like latency, concurrency, timeout behavior, and data consistency play a similar role, and ignoring them leads to subtle system failures.</p>
</li>
<li><p><strong>Think about time differently.</strong> Hardware failures may surface minutes, hours, or weeks later. This pushed me to take long-running process tests, soak tests, and slow-degradation checks much more seriously.</p>
</li>
</ul>
<p> </p><figure>
  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763699116169/1058d5d5-e2be-4222-a5d5-b69aedaa5481.jpeg" alt="Figure 2: Feeding power from an external step-down converter because the load on the on-board LDO was underestimated." />
  <figcaption>
    Figure 2: Feeding power from an external step-down converter because the load on the on-board LDO was underestimated.
  </figcaption>
</figure><p></p>
<h3 id="heading-better-testing-habits-came-directly-from-debugging-real-boards">Better testing habits came directly from debugging real boards</h3>
<p>Hardware debugging is hands-on. You cannot fake it. You grab a multimeter, an oscilloscope, a logic analyzer, and jumper wires, and actually trace the problem. This built a habit of systematic investigation that helped in software too.</p>
<p>Some specific improvements that carried over:</p>
<ul>
<li><p>I write test plans that consider physical-like constraints: delays, environment changes, variability, and noise in inputs.</p>
</li>
<li><p>I value system-level testing more, not just unit-level checks.</p>
</li>
<li><p>I think in terms of failure propagation and how a small issue travels through a bigger system.</p>
</li>
<li><p>I test integration points earlier because hardware taught me that most problems hide at boundaries.</p>
</li>
<li><p>I investigate race conditions and timing issues more aggressively because hardware timing issues rarely lie.</p>
</li>
</ul>
<p> </p><figure>
  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763699134154/97380f08-8dde-41d7-a9ec-fe077353f5c4.jpeg" alt="Figure 3: When bypass capacitors were needed but missed during the design step." />
  <figcaption>
    Figure 3: When bypass capacitors were needed but missed during the design step.
  </figcaption>
</figure><p></p>
<h3 id="heading-hardware-lessons-improved-my-view-on-distributed-software-systems">Hardware lessons improved my view on distributed software systems</h3>
<p>A PCB is a distributed system with strict timing, limited bandwidth, and shared buses. It turns out that these constraints mirror things like:</p>
<ul>
<li><p>API gateways under heavy load</p>
</li>
<li><p>Microservices sharing a message bus</p>
</li>
<li><p>Slow downstream dependencies</p>
</li>
<li><p>Race conditions in event-driven architectures</p>
</li>
<li><p>Synchronization problems in I/O-heavy applications</p>
</li>
</ul>
<p>Understanding signal integrity and timing margins helped me design more realistic test scenarios for distributed software. Instead of "test request X and expect response Y," I now think about:</p>
<ul>
<li><p>What happens if the service is slow?</p>
</li>
<li><p>What if the data arrives out of order?</p>
</li>
<li><p>What if two actions collide?</p>
</li>
<li><p>What if the downstream component responds with inconsistent timing?</p>
</li>
<li><p>What if retries create unintended side effects?</p>
</li>
</ul>
<p>These questions came directly from thinking like a hardware engineer.</p>
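<p>As a concrete flavor of the last question, here is a minimal sketch of the idempotency-key pattern I reach for when retries might repeat a side effect (the <code>applyOnce</code> helper is hypothetical, purely for illustration):</p>

```typescript
// Sketch: guard a side-effecting operation with an idempotency key so
// that a retry of the same logical request does not repeat the effect.
const seenKeys = new Set<string>();

export function applyOnce(idempotencyKey: string, effect: () => void): boolean {
  if (seenKeys.has(idempotencyKey)) {
    return false; // retry detected: skip the side effect
  }
  seenKeys.add(idempotencyKey);
  effect();
  return true;
}

// A retried request reuses its key, so the charge runs exactly once.
let charges = 0;
applyOnce("order-42", () => { charges += 1; }); // first attempt: applied
applyOnce("order-42", () => { charges += 1; }); // retry: skipped
```

<p>Asserting that <code>charges</code> is still 1 after the retry is exactly the kind of "small issue traveling through a bigger system" check that hardware debugging trains you to write.</p>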
<h3 id="heading-hardware-sharpened-my-attention-to-details">Hardware sharpened my attention to details</h3>
<p>Small hardware mistakes create big consequences. A single missing pull-up resistor can break an entire communication channel. A misaligned footprint can render a board useless. A millimeter of placement can lead to thermal issues.</p>
<p>That same discipline improved the way I review and test software:</p>
<ul>
<li><p>Variable names</p>
</li>
<li><p>API contracts</p>
</li>
<li><p>Error handling</p>
</li>
<li><p>Retry logic</p>
</li>
<li><p>State transitions</p>
</li>
<li><p>Security assumptions</p>
</li>
<li><p>Concurrency patterns</p>
</li>
<li><p>Edge-case input validation</p>
</li>
</ul>
<p>Suddenly, small details do not feel small anymore.
 </p><figure>
  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763699171931/29885908-94cd-4ebb-8977-6490347403fd.jpeg" alt="Figure 4: Combo error. One of thirty traces merged into the wrong net, and the incorrect schematic diagram led to a missing capacitor and resistor. Fixed and sealed for durability during the testing cycle." />
  <figcaption>
    Figure 4: Combo error. One of thirty traces merged into the wrong net, and the incorrect schematic diagram led to a missing capacitor and resistor. Fixed and sealed for durability during the testing cycle.
  </figcaption>
</figure><p></p>
<h3 id="heading-you-do-not-need-hardware-experience-to-learn-the-core-lesson">You do not need hardware experience to learn the core lesson</h3>
<p>The lesson is simple and applies everywhere:</p>
<p><strong>When the cost of late discovery is high, early quality becomes the best investment.</strong></p>
<p>Hardware just makes this point loud and clear. But the same thinking benefits software teams, QA teams, SDETs, product managers, and architects. It creates healthier engineering culture, fewer emergencies, and more confidence in what we ship.</p>
<p>And most importantly, it builds a mindset where quality is not "an extra step" but a natural part of the design process.</p>
<p>Have you ever learned something from a completely different field that changed how you work?</p>
]]></content:encoded></item><item><title><![CDATA[The Evolution of Trust: From On-Premises Systems to Cloud and AI Adoption]]></title><description><![CDATA[My career started in a time when most companies were focused on privacy and total control over their systems. Almost everything ran on-premises or on dedicated servers in data centers. Cloud services already existed, but only a few organizations were...]]></description><link>https://testcraftlab.com/evolution-of-trust-from-on-prem-to-cloud-and-ai</link><guid isPermaLink="true">https://testcraftlab.com/evolution-of-trust-from-on-prem-to-cloud-and-ai</guid><category><![CDATA[AI]]></category><category><![CDATA[Data security]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Konstantin Shenderov]]></dc:creator><pubDate>Thu, 20 Nov 2025 04:45:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763650797291/e5c54023-729e-4047-abee-b928f608d250.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My career started in a time when most companies were focused on privacy and total control over their systems. Almost everything ran on-premises or on dedicated servers in data centers. Cloud services already existed, but only a few organizations were willing to trust them with sensitive data.</p>
<p>With time this changed. Cloud platforms matured, their security improved, and companies slowly realized that they could save time and money by offloading part of their workload. Some moved only small components to the cloud. New startups went even further and launched their products without using any on-premises hardware at all.</p>
<p>What sounded unrealistic before became normal. IT and DevOps teams built complete infrastructures and deployment pipelines without touching a single patch cord. Many did it from their living room couch with nothing more than a laptop.</p>
<p>This shift made cloud-based data processing and storage the new standard. Physical hardware stopped being the default choice.</p>
<p>Now we are in the age of AI, and the pattern is repeating. Companies that hesitated for years to leave their on-premises servers are now giving AI assistants access to internal codebases, documentation, and workflows. They use AI to speed up development, train models with production and internal data, and bring intelligent features into their products.</p>
<p>As AI grows, the questions around privacy, data, and trust will only become more important. AI systems will process more information than ever before, and companies will depend on them in ways that were hard to imagine just a few years ago. This creates a new responsibility to protect data, be transparent about how it is used, and build trust with customers and employees. The future will likely require stronger controls, better auditing, and clear rules about what information should never leave private systems. As technology becomes more powerful, the importance of using it responsibly will grow just as fast.</p>
<p>Each major technology change starts with doubt, becomes normal over time, and eventually turns into the foundation for the next step. It is interesting to watch how quickly the boundaries of what feels acceptable continue to move.</p>
]]></content:encoded></item><item><title><![CDATA[Testing AI-Integrated Products with Test Automation: Complexities and Opportunities]]></title><description><![CDATA[Artificial Intelligence is becoming a core part of modern applications, especially in client-facing UIs. From chatbots to recommendation systems, AI now powers user experiences that were unthinkable just a few years ago.
But as testers and engineers,...]]></description><link>https://testcraftlab.com/testing-ai-integrated-products-with-test-automation-complexities-and-opportunities</link><guid isPermaLink="true">https://testcraftlab.com/testing-ai-integrated-products-with-test-automation-complexities-and-opportunities</guid><category><![CDATA[automation testing ]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI Testing Tools]]></category><category><![CDATA[playwright]]></category><dc:creator><![CDATA[Konstantin Shenderov]]></dc:creator><pubDate>Thu, 02 Oct 2025 20:31:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/cckf4TsHAuw/upload/880a288d8da61e766bce8b2a258a6bd4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Artificial Intelligence is becoming a core part of modern applications, especially in client-facing UIs. From chatbots to recommendation systems, AI now powers user experiences that were unthinkable just a few years ago.</p>
<p>But as testers and engineers, we face a difficult question: <strong>how do we test something that doesn’t always give the same response?</strong></p>
<h3 id="heading-the-challenge-of-non-deterministic-ai"><strong>The Challenge of Non-Deterministic AI</strong></h3>
<p>Traditional test automation works best in deterministic systems. For example:</p>
<ul>
<li><p>If I click “Add to Cart” on an e-commerce site, I can assert that the cart count increases by one — always.</p>
</li>
<li><p>If I request <code>/api/products</code>, the schema and values are predictable.</p>
</li>
</ul>
<p>With AI-driven systems, outputs vary:</p>
<ul>
<li><p>The same chatbot question might return different answers.</p>
</li>
<li><p>A recommendation engine may surface different items over time.</p>
</li>
<li><p>AI copilots in IDEs might generate multiple correct-but-different code snippets.</p>
</li>
</ul>
<p>This variability makes rigid assertions fragile. If we test for exact strings or specific items, most tests fail even though the system is working correctly.</p>
<h3 id="heading-where-assertions-still-work"><strong>Where Assertions Still Work</strong></h3>
<p>Automation is not useless in AI contexts; it just shifts focus. We can still validate outputs meaningfully at several levels:</p>
<ol>
<li><p><strong>E2E UI tests</strong></p>
<ul>
<li>Ensure the flow works: inputs trigger AI responses, responses render in the UI.</li>
</ul>
</li>
<li><p><strong>API/schema validation</strong></p>
<ul>
<li><p>Check response structure, presence of required fields, and metadata like response time or confidence scores.</p>
</li>
<li><p>Example: <code>"title"</code>, <code>"highlights"</code>, <code>"season"</code> must always exist.</p>
</li>
</ul>
</li>
<li><p><strong>Functional guardrails</strong></p>
<ul>
<li><p>Validate content is relevant, safe, and aligned with the product’s intent.</p>
</li>
<li><p>Example: a “budget travel” query should never suggest private jets.</p>
</li>
</ul>
</li>
</ol>
<p>Assertions here are <strong>generic and flexible</strong>, focusing on presence, formatting, and intent rather than exact wording.</p>
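<p>A schema-level check along these lines could look like this (a minimal sketch; the field names match the sandbox's JSON contract, while <code>checkTripPlanShape</code> itself is a hypothetical helper, not part of the project):</p>

```typescript
// Minimal structural check for an AI response: assert the presence and
// types of required fields without pinning down exact values.
type TripPlan = {
  destination: string;
  highlights: string[];
  season: string;
};

// Returns a list of problems; an empty list means the shape is acceptable.
export function checkTripPlanShape(data: unknown): string[] {
  const problems: string[] = [];
  const obj = data as Partial<TripPlan>;
  if (typeof obj?.destination !== "string" || obj.destination.length === 0) {
    problems.push("destination must be a non-empty string");
  }
  if (!Array.isArray(obj?.highlights) || obj.highlights.length < 2 || obj.highlights.length > 4) {
    problems.push("highlights must be an array of 2-4 strings");
  } else if (!obj.highlights.every((h) => typeof h === "string")) {
    problems.push("every highlight must be a string");
  }
  if (typeof obj?.season !== "string" || obj.season.length === 0) {
    problems.push("season must be a non-empty string");
  }
  return problems;
}
```

<p>Because the check only cares about presence and shape, it stays green no matter which destination the model picks on a given run.</p>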
<h3 id="heading-using-ai-to-test-ai"><strong>Using AI to Test AI</strong></h3>
<p>A powerful emerging approach is to let AI help validate AI.</p>
<p>Imagine this workflow:</p>
<ol>
<li><p>The test framework triggers an AI request in UI/API.</p>
</li>
<li><p>The product’s AI generates a response.</p>
</li>
<li><p>The test sends both the request and response to <strong>another AI</strong> (validator model) with a prompt:<br /> <strong>“Does this response make sense for the given request?”</strong></p>
</li>
<li><p>The validator AI returns a pass/fail verdict with reasoning.</p>
</li>
</ol>
<p>This “AI validating AI” approach simulates user judgment much better than brittle assertions. Importantly, the <strong>validator should be a different model</strong> (or configured differently) to avoid bias and rubber-stamping.</p>
<h3 id="heading-a-sandbox-project-example"><strong>A Sandbox Project Example</strong></h3>
<p>To explore this, I built a simple <strong>Trip Planner Sandbox</strong>:</p>
<ul>
<li><strong>Frontend</strong>: React + TypeScript + Vite form asking travel preferences.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759444085385/0bbd788e-4afe-4739-bd5b-b8700ced2ced.png" alt class="image--center mx-auto" /></li>
<li><strong>Backend</strong>: Express + OpenAI SDK, prompting the AI to generate a trip plan:</li>
</ul>
<pre><code class="lang-ts">app.post(<span class="hljs-string">"/api/plan-trip"</span>, <span class="hljs-keyword">async</span> (req, res) =&gt; {
  <span class="hljs-keyword">const</span> { preferences } = req.body;

  <span class="hljs-keyword">const</span> completion = <span class="hljs-keyword">await</span> openai.chat.completions.create({
    model: <span class="hljs-string">"gpt-4o-mini"</span>,
    messages: [
      {
        role: <span class="hljs-string">"system"</span>,
        content:
          <span class="hljs-string">"You are a helpful travel planner. Always respond in strict JSON with keys: destination (string), highlights (array of 2–4 strings), season (string: best season to visit)."</span>
      },
      {
        role: <span class="hljs-string">"user"</span>,
        content: <span class="hljs-string">`Plan a trip based on these preferences:\n<span class="hljs-subst">${<span class="hljs-built_in">JSON</span>.stringify(preferences, <span class="hljs-literal">null</span>, <span class="hljs-number">2</span>)}</span>`</span>
      }
    ],
    response_format: { <span class="hljs-keyword">type</span>: <span class="hljs-string">"json_object"</span> }
  });

  <span class="hljs-keyword">const</span> content = completion.choices[<span class="hljs-number">0</span>]?.message?.content;
  res.json({ success: <span class="hljs-literal">true</span>, data: <span class="hljs-built_in">JSON</span>.parse(content!) });
});
</code></pre>
<p>and return structured JSON:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"destination"</span>: <span class="hljs-string">"Appalachian Mountains"</span>,
  <span class="hljs-attr">"highlights"</span>: [<span class="hljs-string">"Hiking"</span>, <span class="hljs-string">"Camping"</span>],
  <span class="hljs-attr">"season"</span>: <span class="hljs-string">"Fall"</span>
}
</code></pre>
<ul>
<li><strong>Tests</strong>: Playwright with Page Object Model and a custom <strong>AI Validator utility</strong> that checks whether the trip plan is plausible:</li>
</ul>
<pre><code class="lang-ts"><span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">validateTripPlan</span>(<span class="hljs-params">
  prefs: Preferences,
  plan: TripResult
</span>): <span class="hljs-title">Promise</span>&lt;</span>{ pass: <span class="hljs-built_in">boolean</span>; reason?: <span class="hljs-built_in">string</span> }&gt; {
  <span class="hljs-keyword">const</span> systemPrompt = <span class="hljs-string">`
    You are a validator. Respond in JSON with:
    - pass (boolean)
    - reason (string if not valid)
  `</span>;

  <span class="hljs-keyword">const</span> userPrompt = <span class="hljs-string">`
    Request: <span class="hljs-subst">${<span class="hljs-built_in">JSON</span>.stringify(prefs)}</span>
    Response: <span class="hljs-subst">${<span class="hljs-built_in">JSON</span>.stringify(plan)}</span>

    Rules:
    1. Destination must be plausible.
    2. Highlights: 2–4 relevant activities.
    3. Season: best time to visit.
  `</span>;

  <span class="hljs-keyword">const</span> completion = <span class="hljs-keyword">await</span> openai.chat.completions.create({
    model: <span class="hljs-string">"gpt-4o-mini"</span>, <span class="hljs-comment">// Or a different model for validation</span>
    messages: [
      { role: <span class="hljs-string">"system"</span>, content: systemPrompt },
      { role: <span class="hljs-string">"user"</span>, content: userPrompt }
    ],
    response_format: { <span class="hljs-keyword">type</span>: <span class="hljs-string">"json_object"</span> }
  });

  <span class="hljs-keyword">return</span> <span class="hljs-built_in">JSON</span>.parse(completion.choices[<span class="hljs-number">0</span>]!.message!.content!);
}
</code></pre>
<p>Then, in a test case, we can use this AI validation utility to check our AI-generated response:</p>
<pre><code class="lang-ts">test(<span class="hljs-string">"AI trip suggestions make sense"</span>, <span class="hljs-keyword">async</span> ({ page }) =&gt; {
  <span class="hljs-keyword">const</span> tripPage = <span class="hljs-keyword">new</span> TripPlannerPage(page);
  <span class="hljs-keyword">await</span> tripPage.goto();

  <span class="hljs-keyword">const</span> prefs = {
    preference: <span class="hljs-string">"mountains"</span>,
    budget: <span class="hljs-string">"$1000"</span>,
    companions: <span class="hljs-string">"family"</span>,
    climate: <span class="hljs-string">"mild"</span>,
    duration: <span class="hljs-string">"1 week"</span>
  };

  <span class="hljs-keyword">await</span> tripPage.fillPreferences(prefs);
  <span class="hljs-keyword">await</span> tripPage.clickOnSubmitButton();

  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> tripPage.getTripResult();
  <span class="hljs-keyword">const</span> validation = <span class="hljs-keyword">await</span> validateTripPlan(prefs, result);

  expect(validation.pass, validation.reason).toBeTruthy();
});
</code></pre>
<p>The validator enforces rules like:</p>
<ul>
<li><p>Destination must be a real, plausible place.</p>
</li>
<li><p>Highlights must be 2–4 relevant activities.</p>
</li>
<li><p>Season must reflect the best time to visit.</p>
</li>
</ul>
<p>This sandbox showed how we can integrate AI-assisted validation into a modern test automation workflow.</p>
<h3 id="heading-pros-of-this-approach"><strong>Pros of This Approach</strong></h3>
<ul>
<li><p>✅ <strong>Simulation of real user experience</strong>: Instead of checking raw text, we validate the meaning and relevance.</p>
</li>
<li><p>✅ <strong>Relevancy testing</strong>: Helps catch off-topic, nonsensical, or unsafe AI outputs.</p>
</li>
<li><p>✅ <strong>True E2E flow</strong>: Covers request, AI generation, and user-facing response validation.</p>
</li>
</ul>
<h3 id="heading-cons-and-limitations"><strong>Cons and Limitations</strong></h3>
<ul>
<li><p>⏳ <strong>Execution time</strong>: Each test requires at least two AI calls (generation + validation).</p>
</li>
<li><p>⚙️ <strong>Complexity</strong>: Needs a fully integrated product and supporting infrastructure for AI-based assertions.</p>
</li>
<li><p>💸 <strong>Maintenance cost</strong>: Teams must ensure consistency, handle evolving AI behavior, and manage test flakiness.</p>
</li>
<li><p>📉 <strong>Flakiness</strong>: Sometimes the AI Validator can be overly strict. Test data must not only be valid but also realistic. As models evolve, results may shift due to improved reasoning. For example, during one test run, what seemed like a valid input was rejected with the following message:</p>
<ul>
<li><em>“Error: Validation failed. Reason: The budget might be insufficient for a family trip to Maui, especially during peak season without any accommodation or transport details provided.”</em></li>
</ul>
</li>
</ul>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>AI-assisted testing of AI systems isn’t a silver bullet, but it’s a powerful <strong>complement to traditional automation</strong>. While it introduces cost and complexity, it can provide <strong>valuable insights into user experience and AI response quality</strong> - things that deterministic tests alone cannot cover.</p>
<p>For many teams, the best use case is to integrate this approach into <strong>nightly or exploratory test runs</strong>, where slower but more meaningful validations are acceptable. Ultimately, if your product heavily relies on AI for user-facing functionality, this type of testing may be worth the investment.</p>
<h3 id="heading-resources-amp-connect">🔗 Resources &amp; Connect</h3>
<p>If you’d like to explore the full <strong>Trip Planner Sandbox</strong> project, the code is available here:<br />👉 <a target="_blank" href="https://github.com/shenderov/trip-planner-sandbox">GitHub Repository</a></p>
<p>I’d also love to connect and discuss more about <strong>AI testing, automation, and software engineering</strong>.<br />👉 <a target="_blank" href="https://linkedin.com/in/shenderov">Connect with me on LinkedIn</a></p>
]]></content:encoded></item></channel></rss>