my tech perspectives
a space for me to share reflections on the rapidly evolving technology landscape, recap industry events, and explore emerging ideas shaping the future
You’re so predictable
Maybe language isn’t as deeply meaningful as we think. As LLMs grow more sophisticated, language is becoming more of a commodity than ever. Sitting in a lecture recently, I listened to a professor from NYU discuss the current state of GenAI and LLMs: how they were developed, what they can offer us, and why they are powerful. LLMs are excellent at prediction, guessing the next word most likely to appear based on the patterns they have seen in the past – which raises the question: why are LLMs so good at that? Because humans, and the way we use and interpret language, are highly predictable. I’d argue that humans like to think they’re incredibly special, and that one of the ways we’ve evolved to be so dominant on earth is through the power of language and communication. LLMs have stripped this differentiation away from us.
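To make the prediction idea concrete, here’s a toy sketch of the core task: a tiny bigram model that counts which word follows which in a small corpus and then guesses the most likely next word. The corpus and function names are my own invention for illustration; real LLMs use neural networks over vast datasets, but the underlying job is the same.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only "knows" patterns it has already seen.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus, if any."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → cat  ("cat" follows "the" twice, beating "mat" and "fish")
```

The point of the toy: the model can only ever recombine what it has already seen, which is exactly why predictability in human language makes it work so well.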
In this new age of LLMs, what makes humans special? I’d like to explore the implications of this commoditization of language and knowledge in three areas: creativity, individuality, and agency.
Creativity
I’ve been thinking lately about a photography class I took in college where we spent a week trying to unpack what creativity and originality mean. We all like to believe we are original; that all the things we think and create are our own. In reality, all of our creativity is influenced by works of art, songs, or other mediums that we’ve seen before. Your own personal experiences, environment, and exposure shape who you are and what you think. It’s nearly impossible to create something completely and utterly original. That’s why so many songs have familiar tunes and sequences of notes. We’ve all been listening to a song and thought, that sounds familiar, only to realize it’s a different song with nearly the same melody. Artists of all kinds have heard, read, seen, and experienced various forms of art that influence what they ultimately create. So is it really original? Or is it just a copy of something they’ve seen before? Is anything truly original?
While it can seem like LLMs are generating new threads or outputs, this is just surface-level. LLMs have to see something first - their output is not inherently unique, and it tends to lack character. This is how they are designed: lacking adaptability and a four-dimensional (or higher-dimensional, if you’re into string theory) model of the world. Everything is based on associations. They aren’t truly ‘creative’ in the way we think about generating new art or art forms.
Individuality
Individuality is an interesting concept as it relates to generative AI models. As LLMs are updated and released, you could say that each iteration is unique, technically having its own logical processing that differs from the one before. Its training data could be considered its ‘experiences’, influencing the content it creates. It also learns about a particular user through its interactions, adding that person’s characteristics to its output.
However, these models all start from the same base code. While a model can cater itself to the different people or ‘profiles’ it interacts with, it ultimately starts from the same place. This is markedly different from humans. Even identical twins who grow up in the same environment and household turn out to be completely different individuals. They may share a ‘baseline’ the way models do, but twins eventually develop different thought processes and patterns that aren’t replicable. Their experiences in the world shape them into distinct human beings with unique thoughts that could not be replicated. LLMs, in the background, perform the same mathematical calculations based on their training, no matter the context you feed into them. There is no sense of individuality in a generative AI model.
Agency
This all leads me back to the question of what makes a human special – I’d like to think it’s our agency. AI and LLMs will eventually be better than us at many skills: math, knowledge retention, speed of output, etc. Their seemingly infinite memory, computational capacity, and processing speed are simply incomparable to what a human brain can retain. However, humans have the ability to make our own choices. We get to decide which influences we want to lean into when creating something original. We make decisions on our own terms, while LLMs merely produce output from those directional decisions.
Creative agency matters now more than ever when I think about writing. LLMs are good at generating stories and narratives, but (at least for now) they lack the agency to direct what they produce - they must be prompted to guess the next best sequence of characters.
Looking Ahead
Humans are individually unique in a way that LLMs cannot emulate. LLMs have inherent limitations stemming from their design as prediction models (under the hood, primarily computing dot products - a conversation for another time). For now, I continue to hold the belief that a human’s experience in the real world, and their agency, will remain the differentiating factors, regardless of how much faster LLMs can process and predict language.
I’ve been reflecting on all of this as I begin this blogging journey. It would be incredibly easy to use LLMs to generate the content I share - write my thoughts and feelings in more coherent ways, edit my work into more grammatically correct structures, influence my ideas and conclusions. And while there are benefits to some of these actions, I feel more strongly than ever that it is important to use the tools at hand with caution. There is nothing exciting about another essay written and edited by AI. I am confident there will be plenty of that in our future. I want to maintain this space as a place to write my own original thoughts and feelings, even if they could be predicted with great accuracy by an LLM. A model can’t emulate my thoughts and feelings, my perspective on the world, or the places and people that have left an impact on me. Only I can employ my agency and individuality to create written thoughts, and share them on my own timeline.
my thoughts on Kiro
A few weeks ago, at the NYC Summit, AWS announced the launch of Kiro – a new IDE for AI-native and agentic development. What stood out most to me was not the software itself, but the workflow. AI is changing the way we think and how we work, and it would only be natural that the same would be true for the tools we use to code. Kiro is just at the start of transforming how we interact with software tools, and I want to share my thoughts.
I’ve been hearing a lot about vibe coding these days, from customers, coworkers, and friends alike (for those unfamiliar, vibe coding is a style of software development where AI agents generate the majority of the actual code, meaning less hands-on-keyboard work for developers). As someone who does not come from a traditional software development background, I’ve found Kiro to be the perfect tool for adding structure to vibe coding, raising concerns or ideas I hadn’t considered, or helping me brainstorm how I might improve an idea before coding even begins.
Before I get into the actual workflow of the tool for those interested, I want to highlight a key feature of Kiro: hooks. A hook is an automation that performs a task in response to an action taken by the user. The simplest example of such an action is hitting save while coding (you can think of this like saving a Word document). You can create a hook on ‘save’ that automatically updates documentation or generates a new unit test based on changes in the file. These tasks can be tedious for developers and DevOps teams to maintain and can slow down deployment of production-ready code.
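Conceptually, a hook is just a callback registered against an event. The sketch below is a generic Python illustration of that pattern, not Kiro’s actual API; the event name, decorator, and task functions are all hypothetical, chosen only to show how an on-save trigger could fan out to documentation and test tasks.

```python
from collections import defaultdict

# Hypothetical hook registry: maps an event name to the tasks it triggers.
hooks = defaultdict(list)

def on(event):
    """Decorator that registers a function as a hook for the given event."""
    def register(task):
        hooks[event].append(task)
        return task
    return register

@on("file_saved")
def update_docs(filename):
    return f"docs regenerated for {filename}"

@on("file_saved")
def generate_unit_test(filename):
    return f"unit test drafted for {filename}"

def fire(event, *args):
    """Run every task registered for an event, as an editor would on save."""
    return [task(*args) for task in hooks[event]]

print(fire("file_saved", "app.py"))
# → ['docs regenerated for app.py', 'unit test drafted for app.py']
```

The appeal of the pattern is that the developer saves a file exactly as before, while the tedious follow-up work happens automatically in the background.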
A perfect use case for Kiro came out of a conversation with, of all people, my father. He recently started working at a startup called Chainguard, a container security company. Chainguard provides secure container images that are free from common vulnerabilities and exposures (CVEs). Picture this: every time a developer saves a file, Kiro automatically checks the code against the provided Chainguard images for vulnerabilities. This hook, designed as a security guardrail, is exactly where Kiro shines.
I do want to spend time exploring the details of Kiro’s workflow, but I’ll start by touching on how the process of creating an AI agent works today. Most developers who craft agents send large, complex prompts to LLMs that process and output an agent workflow. This can be problematic for a lot of reasons, the main one being a lack of visibility into how the agent will be making decisions. Transparency and explainability are key concerns. LLMs make a lot of assumptions based on gaps in prompts and on how they were trained, both of which can be hard to identify. To address this, let’s walk through Kiro’s approach, what they are calling ‘spec (specification) driven development’.
Start by describing the agent decisions or workflow you are looking to build. Where a tool like Cursor (or any other AI coding assistant) would begin generating code immediately, Kiro takes in your description and returns a set of natural-language requirements that it thinks will meet it. You chat back and forth, giving feedback on the requirements you like and where you want adjustments made.
Once you’re satisfied with the requirements, Kiro will begin to build a design doc, generating technical specifications, architecture diagrams, workflows and subtasks that will be used to achieve the agent logic you described. Again, this is an iterative process. Kiro will take feedback, clarifying any assumptions that it may have made incorrectly.
Finally, task creation begins. Kiro works in autopilot mode to create the code that will characterize your agentic workflow. This can be stopped at any time, and since you iterated beforehand with requirements and specifications, Kiro can adjust its output without having to start over from scratch.
Conceptually simple, but groundbreaking in terms of how a developer interacts with an AI coding assistant today. You don’t need all the answers from the start - Kiro helps you parse out the most important architecture requirements, and how they fit into the larger logical flow, as you go. By deep diving into the details, the agent coding experience becomes more enterprise-grade and production-ready.
As I continue to work with my customers on their applications, I’m excited to see how Kiro can support their AI development process and meet them where they are. While Kiro remains in private beta, we are eager to get more feedback on the tool, make it faster, and support more LLMs.
—
Learn more about Kiro here
WSJ’s Future of Everything Conference Recap
Earlier this summer, I had the opportunity to attend WSJ’s Future of Everything Conference, which focused on technological innovation across industries like aviation, finance, healthcare, culture, and infrastructure. While broad in scope, the core themes discussed throughout the conference centered on accessibility and democratization—largely driven by the productivity gains expected from generative AI. This technology is advancing at an unprecedented pace, with implications for both the near and long term—faster than any previous technological revolution.
The first and most AI-focused topic was enterprise infrastructure. The conference featured a co-founder of Groq and the COO of OpenAI, both optimistic about the opportunities ahead. Groq develops inference chips using a tensor streaming processor that delivers ultra-low latency and deterministic inference performance. OpenAI, in contrast, is focused on offering best-in-class LLMs through API layers that support a wide variety of use cases. While their approaches differ, both speakers predicted major implications for the labor market. Groq’s co-founder highlighted three key outcomes: people will become more productive, more businesses will emerge due to increased accessibility, and goods may become cheaper as production costs decline. In theory, this could even allow people to retire earlier. Predictions aside, one thing is clear: the workforce is on the brink of significant change. Increased productivity and broader access to information will empower more people to pursue new business ideas without needing significant initial investment.
Beyond infrastructure, healthcare was the most prominently featured industry at the conference. It’s a natural fit for AI applications focused on improving efficiency, accuracy, and outcomes—particularly in drug discovery and access to care. Innovations in clinical trials aim to make them safer and more effective. Scientists have spent decades working on treatments for diseases like cancer and other currently incurable conditions. AI has the potential to accelerate that progress, enabling more personalized care, better R&D insights, and more accurate predictions.
A final recurring theme across sessions was the democratization of technology—how platforms, transportation, and education are empowering people to do more with fewer resources. In aviation, modernization and sustainable fuels promise to make flying more accessible and climate-conscious. In fintech and cybersecurity, automation and AI are expected to drive new efficiencies. Digital platforms are also evolving toward more community-driven experiences, shaped by both content creators and active users. This shift brings new models of crowdsourced trust, content moderation, and participation. From reducing the paradox of choice on dating apps to rethinking how online opinions are verified and rewarded, companies are leaning into transparency, experimentation, and culture-first innovation.
A quote that captures the moment well:
 “Progress is exponential: we tend to overestimate the short term and underestimate the long term.”
One thing is certain—change is inevitable in today’s technological and social environment. We’re only beginning to tap into AI’s potential, and there’s much more to learn about how it will reshape society, culture, and the workplace. AI will soon be embedded into nearly every aspect of technology. Embracing this shift and educating ourselves is the first step toward understanding its full impact.
Something I heard recently helped frame this rapid evolution of AI and the commoditization of knowledge: while knowledge and technology may become commoditized, agency—the human capacity to choose and act—will remain our defining trait.