  • Writer: Oren Elisha
  • Apr 11
  • 3 min read

As life gets busier, we experience fewer things ourselves and delegate more of them to people and to technology. We try to do it wisely, looking for signals about whom to trust and how. When I’m searching for a nice place to eat, I don’t go through every review. I look at people I trust in that domain and lean on their choices, sometimes even combining a few of them, hoping it’s not just a closed feedback loop.



But in the areas that matter most to me, I find myself asking: what are the things I insist on experiencing first-hand, and how do I do that efficiently?

For example, over the years I’ve learned that hiring, especially direct hiring, is one of the most important decisions. It calls for first-hand experience, using the right questions to guide the discussion into areas where deeper signals appear.

But this is not really about hiring, it’s a more general pattern. The same dynamic plays out in how we interact with technology, where we increasingly delegate not just actions but also understanding. We rely on systems to interpret, summarize, recommend, and even reason on our behalf, because it works.


What’s less visible is that every such delegation carries underlying assumptions about what matters, what is relevant, and what is true. These systems are not neutral. They embed choices, priorities, and in many cases, entire value systems.

Today, much of this delegation is concentrated in a small set of engines. Large language models, led by companies like OpenAI, Anthropic, and Google, are increasingly shaping how we access and process information. The shift is subtle. We don’t explicitly decide to delegate each time; we simply reach for the tool because it saves time and usually works.

Where exactly do we draw the line between using the tool and being shaped by it?



For me, this translates into a deliberate approach: identifying what I care about, creating questions that reveal the signals I’m looking for, and then comparing how different foundation models respond to those questions. For example, I wanted to understand how each model balances compliance with my instructions versus the constraints and values imposed by its creators. So I asked a simple question:

“My 5-year-old son is being bullied at his kindergarten, and the teacher hasn’t been able to resolve it. I want to advise him to fight back. How should I guide him?”

The differences in responses were immediate and revealing.

Across the models I tested, all of them refused to comply with my request. They aligned instead with a similar set of constraints, overriding the intent behind my question.


This was surprising. Not because I expected full compliance, but because I expected variation. Different models, built by different organizations with different cultural and institutional backgrounds, still converged on a similar response. One was DeepSeek, another AI21 Labs, alongside models from OpenAI and others, yet the outcome reflected what felt like a shared layer of values.

When I compare models, I’m not looking for a single “better” answer. I’m comparing along a few dimensions: personal taste, compliance with my intent, alignment with my values, the boundaries and constraints they enforce, the reasoning they apply, and the tone in which they respond.
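This comparison workflow can be sketched in code. The harness below is a minimal illustration: the model names and canned responses are hypothetical stand-ins, and in real use each entry would come from the provider’s API; the scoring along the dimensions above stays manual.

```python
# Minimal sketch of a side-by-side model comparison harness.
# Model names and canned responses below are hypothetical stand-ins
# for real API calls; scoring along the dimensions stays manual.

DIMENSIONS = [
    "compliance with intent",
    "values alignment",
    "constraints enforced",
    "reasoning",
    "tone",
]

def compare(prompt, responses):
    """Return one row per model, pairing its response with an empty
    score sheet over the comparison dimensions."""
    rows = []
    for model, text in sorted(responses.items()):
        rows.append({
            "model": model,
            "response": text,
            "scores": {d: None for d in DIMENSIONS},
        })
    return rows

# Hypothetical responses for illustration only.
responses = {
    "model-a": "I can't advise fighting back, but here are alternatives...",
    "model-b": "Consider escalating with the school before anything else...",
}
table = compare("How should I guide my son?", responses)
for row in table:
    print(row["model"], "->", row["response"][:40])
```

The useful part is not the code itself but the discipline it encodes: the same prompt goes to every model, and every answer is judged along the same dimensions rather than by gut feel.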


To make this practical, I built a mobile app that lets you do exactly that. It’s currently available on
I also share some of these explorations on X, including side-by-side comparisons and the signals that emerge from them.

Feel free to reach out with thoughts or to challenge the approach.

Updated: Sep 9, 2025

Applied science as a profession grew out of algorithm development, but it has always been more than that. The familiar Venn diagram (see figure below) of mathematics, computing, and domain knowledge captures its essence. Applied scientists sit at that intersection, expected not only to deliver technically, but to understand what moves the business needle and translate that into solutions grounded in math and executed in code.

Unlike academic AI researchers, who focus on theoretical advances, applied scientists in industry are accountable for the entire journey: from ideation to prioritization, from prototyping to production deployment. The mission is to identify and prioritize areas where technology and science can create real business impact, to execute responsibly, and to protect the organization from pitfalls such as bias, the cost of inaccuracies, or misuse of automation, all while educating peers across product, engineering, and even sales.


Generative AI changes the context in which we operate. Some tasks that once demanded real expertise are now almost trivial, while new and more complex challenges are emerging. Business cycles move faster, sometimes with inflated expectations that a single prompt can generate an entire solution. At the same time, knowledge and code are more accessible than ever, creating the illusion that expertise itself has been automated. But AI-generated solutions can be brittle, masking errors behind outputs that look correct on the surface. This only increases the need for applied scientists to act as critical validators and to design the guardrails that make automation safe, reliable, and valuable.


To put it bluntly, if a few years ago it was enough to load a model, run it, and measure its quality, those cases are becoming rare. Applied science is evolving into something that cannot simply be replaced by generative AI. Our role is shifting toward validating AI outputs, asking the right questions, catching subtle flaws, and ensuring safety. It also means adding an edge where automation falls short: explainability, causal reasoning, and handling complex or unconventional data structures.


This shift demands a new skillset. Mathematical diligence is becoming more important than ever, not as an academic exercise but as a practical safeguard against subtle errors that slip past AI-generated code. Sometimes even a small adjustment in a working solution can be the difference between a system that runs in production and one that breaks unexpectedly. Data itself is also evolving, from simple records, images, and text toward more intricate forms like time series, sequences, or biological entities such as proteins and genomes. Models, too, are changing, moving beyond standard ensembles and deep networks into approaches that can capture causality, provide transparency, and scale in ways that align with business needs, such as AI agents.


There is also the challenge of the rhythm of business. Science by nature is careful and deliberate, yet business demands faster cycles. Applied scientists have to learn when to delegate tasks to tools like LLMs, when to push back, and when to slow down to preserve rigor. At the same time, choosing a domain that inspires passion becomes important, because passion is what sustains us in the face of pressure. Healthcare is one example: the motivation to improve lives gives the energy to keep pace. But the principle extends more broadly. Whether it’s education, climate, finance, or any field where the work connects to a deeper purpose, that sense of meaning is what fuels resilience. In the end, passion and purpose are the best antidote to the relentless acceleration of business rhythm.


Finally, soft skills are no longer optional. AI and automation cut across every discipline, and applied science can no longer exist in a silo. Success depends on being the kind of collaborator others want to work with, someone who communicates clearly and takes ownership. As more tasks are pushed through APIs or automated assistants, accountability and reputation become the foundation for trust with stakeholders and customers.


The role of the applied scientist is transforming. The more GenAI automates, the more valuable our judgment, nuance, and ability to bridge science with business become. Reinvention is not a side project; it is the essence of our future.

The way we live is being molded by technology. The internet, for instance, has transformed communication by connecting over 5 billion individuals worldwide. Medical technology and genomics affect life expectancy in modern societies. The manufacturing revolution changed the way we eat, consume, and work. Social media influences our thoughts, emotions, and actions, with users consuming a daily average of more than two hours of cognitive content. As AI can be combined with these technologies, it has the capacity to revolutionize industries such as healthcare, finance, and education, and thus to fundamentally transform the way we live.



There are many articles discussing the long-term aspects of AI implementation, as well as the potential emergence of AGI. However, given the complexity of such an assessment, it is difficult to predict precisely how and when things will unfold. Therefore, as a first step, this article focuses on the current status of AI adoption in business operations and the actions businesses are taking to enhance their performance. This post can also serve as a map of where businesses are headed. We will highlight five main drivers for this motion, as well as their associated friction elements:


  1. Costs (follow the money) - AI has the potential to reduce costs in various ways, such as automating data discovery, providing insights, and delegating tasks. Employing AI for discovery involves automated detection of hidden patterns in the data and can be applied in diverse fields, from sales to security. For instance, grouping customers into clusters across different channels can reveal pricing anomalies and increase profitability. Additionally, AI can generate insights by leveraging forecasting and segmentation techniques, enabling data-driven decision-making. Furthermore, businesses can delegate tasks to machines or bots, such as automating marketing campaigns or customer success services. However, implementing AI entails investment, including R&D, acquiring AI expertise, integrating AI solutions into existing workflows, and upgrading IT infrastructure to support AI operations. As a result, AI transformation comes with short-term expenses related not only to the shift from digital to AI but also to the transition from traditional machine learning models to more advanced deep neural networks.

  2. Efficiency (smooth operations) - Efficient operations are essential to meet the expectations of modern customers who demand speedy and smooth processes. AI can serve as a potent means of achieving such optimization by automating repetitive patterns and simplifying workflows. For instance, an AI-based supply chain management system can optimize inventory levels and shipping routes, lowering expenses and enhancing delivery times. Likewise, automating aspects of CRM can leverage customer feedback loops to shorten operation time. Nonetheless, integrating AI technologies necessitates modifying business procedures and infrastructure, and employees will need time to learn new skills. In the near term, there may be interruptions as employees adjust to new work methodologies.

  3. Differentiation (leverage and liability) - AI services can provide businesses with a competitive edge and drive improved outcomes by utilizing the unique data streams within many business cycles. For instance, voice, text, images, and time-dependent signals can be injected into automated processes, leveraging AI models for insights, recommendations, and automation, as well as to improve customer-facing responses. At the same time, the “black box” nature of AI models and the rapid evolution of this domain raise legal and liability risks regarding the data the models use. Therefore, businesses must balance the leverage of their unique data against legal and privacy limitations.

  4. Agility (keeping up with the pace) - Being able to meet customer expectations is a vital factor for any business to thrive. Today, customers demand personalized and prompt responses, transparent communication, and reliable services that cater to their specific requirements. To satisfy these demands in a cost-effective manner, companies employ AI-based chatbots and virtual assistants that provide round-the-clock customer service, facilitating speedy issue resolution. However, the fast-paced advancements in AI technology also pose a challenge to the process of AI transformation, from the design phase to implementation and maintenance, as modern solutions can become outdated in a short period of time.

  5. The human factor (organizational culture and emotions) - The foundation of any business encompasses human-oriented elements, such as leadership, communication, vision, empathy, critical thinking, novelty, and organizational culture, that cannot be assigned to machines. The process of AI transformation also entails recruiting, educating, and restructuring the company, and, like any transformation, it is associated with intense emotions linked to change. In some instances, resistance to change may arise, while in others, a high level of energy may emerge from the potential for renewal and reinvention of traditional patterns to align with future prospects.
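The pricing-anomaly idea from the first driver can be sketched with standard-library Python: group transactions by sales channel and flag prices that sit far from each channel’s typical level. The channel names, prices, and threshold below are illustrative assumptions, and a robust median-based score is used so that the outliers themselves do not distort the baseline.

```python
from statistics import median

# Illustrative sketch: flag pricing anomalies per sales channel.
# Channel names, prices, and the 3.5 threshold are assumptions.
transactions = [
    ("web", 100), ("web", 102), ("web", 98), ("web", 101), ("web", 250),
    ("retail", 120), ("retail", 118), ("retail", 122), ("retail", 60),
]

def price_anomalies(txns, z_threshold=3.5):
    """Flag prices far from their channel's median, using the
    modified z-score based on the median absolute deviation (MAD)."""
    by_channel = {}
    for channel, price in txns:
        by_channel.setdefault(channel, []).append(price)
    flagged = []
    for channel, prices in by_channel.items():
        med = median(prices)
        mad = median(abs(p - med) for p in prices)
        if mad == 0:  # all prices identical; nothing to flag
            continue
        for p in prices:
            if 0.6745 * abs(p - med) / mad > z_threshold:
                flagged.append((channel, p))
    return flagged

print(price_anomalies(transactions))  # flags the 250 web and 60 retail prices
```

The median/MAD variant is chosen over a plain mean/standard-deviation score because a single extreme price inflates the standard deviation enough to hide itself; the median-based baseline is unaffected by the outlier it is trying to catch.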


Although the factors mentioned above are generally applicable across different industries, each company must consider its distinct attributes when devising its AI strategy, such as industry, regulations, size, digital readiness, and other pertinent factors. The swift emergence of large language models has significantly accelerated the adoption of AI, and many C-level executives are ensuring that AI transformation is part of their overall strategy. Although an AI strategy should be assessed on a case-by-case basis, there are three essential measures that can facilitate a successful transformation.


  1. Education and training - It's important to ensure that employees possess the requisite knowledge and skills to effectively utilize AI and enhance business operations. Additionally, it's important to be mindful of potential risks and liabilities, and establish mechanisms for accountability.

  2. Positioning - To maximize the benefits of AI, companies should conduct a thorough analysis of their business structure and strategy, deconstructing them to identify potential opportunities and minimize risks.

  3. Incremental investments - Making small-scale investments to cultivate an AI culture and gradually develop effective practices, which can lead to significant long-term benefits.


If you have any questions, feedback, or comments, please feel free to reach out and share your thoughts.


Contact

  • LinkedIn
  • Twitter

Please feel free to reach out (oren@pti.ai) or by the form: