Saturday, December 2, 2023

A comprehensive overview of generative AI and LLMs' trends, use cases, and future implications II. - Engineering and development insights

7 weeks (from 4.9. to 22.10.2023) in the world of Large Language Models and Generative AI tools, this time more focused on the engineering side:


Prompt engineering:

Parallel processing in prompt engineering: the skeleton-of-thought technique.

Unlocking reliable generations through Chain-of-Verification - a leap in prompt engineering.

LLMOps: production prompt engineering patterns with Hamilton.

Crafting different types of program simulation prompts - defining the new program simulation prompt framework.

Some kick-ass prompt engineering techniques to boost our LLMs.

And other prompt engineering tips, a neural network how-to, and recent must-reads.


AI Development and Engineering:

The team behind GitHub Copilot shares its lessons from building the app.

Amazon Bedrock for building and scaling generative AI applications is now generally available.

Experience from building generative AI apps on Amazon Web Services, using Amazon Bedrock and SageMaker.

A guide with 7 steps for mastering LLMs.

Key tools for enhancing Generative AI in Data Lake Houses.

An introduction to loading Large Language models.

Introduction to ML engineering and LLMOps with OpenAI and LangChain.

MLOps and LLM deployment strategies for software engineers.

Modern MLOps platform for Generative AI.

Leveraging the power of LLMs to guide AutoML hyperparameter searches.

LLMs demand Observability-Driven Development.

LLM monitoring and observability — a summary of techniques and approaches.

How to build and benchmark your LLM evals.

A step-by-step guide to selecting and running your own generative model.

Google Research: Outperforming larger language models with less training data and smaller model sizes - distilling step-by-step.

Google Research: Rethinking calibration for in-context learning and prompt engineering.

Apache Kafka as a mission-critical Data Fabric for GenAI.

Training ChatGPT on your own data.

Hugging Face's guide to optimizing LLMs in production.

Hugging Face is becoming the "GitHub" for Large Language Models.

Building a microservice for multi-chat backends using Llama and ChatGPT.

Connect GPT models with company data in Microsoft Azure.

Tuning LLMs with MakerSuite.

Fine-tuning LLMs: Parameter Efficient Fine Tuning (PEFT), LoRA and QLoRA.

How to train BERT for masked language modeling tasks.

Extending context length in Large Language Models.

Conversational applications with Large Language Models - understanding the sequence of user inputs, prompts, and responses.

Using data lakes and Large Language Models in development.

How to build an LLM from scratch.

LLM output parsing: function calling vs. LangChain.

Enhancing the power of Llama 2: 3 easy methods for improving your Large Language Model.


Keeping LLMs relevant and current - Retrieval Augmented Generation (RAG).

Build and deploy Retrieval Augmented Generative Pipelines with Haystack.

Why your RAG is not reliable in a production environment.


QCon San Francisco: 

Unlocking enterprise value with Large Language Models.

A modern compute stack for scaling large AI, ML, & LLM workloads.

Saturday, November 25, 2023

A comprehensive overview of generative AI and LLMs' trends, use cases, and future implications I. - Business, technology trends and applications

7 weeks (from 4.9. to 22.10.2023) in the world of Large Language Models and Generative AI tools:


AI in Business and Technology Trends:

How OpenAI turned LLMs into a mainstream success.

Oracle outlines a vision for AI and a cloud-driven future.

Enterprise SaaS companies have announced generative AI features, threatening AI startups.

How Generative AI is disrupting data practices.

Data Provenance in the age of Generative AI.

Is ChatGPT going to take data science jobs?

40% of the labour force will be affected by AI in 3 years.

And Gartner says: 

55% of organizations are in piloting or production mode with Generative AI.

CIOs must prioritize their AI ambitions and AI-ready scenarios for the next 12-24 months.

More than 80% of enterprises will have used Generative AI APIs or deployed Generative AI-enabled applications by 2026.

60% of seller work will be executed by Generative AI technologies within five years.


AI Applications and Use Cases:

Large Language Models in real-world customer experience applications.

Five generative AI use cases companies can implement today.

Five use cases for CFOs using generative AI.

Revolutionizing business automation with generative AI.

Redefining conversational AI with Large Language Models.

Pros and cons of using LLMs for moderating bad content.

Generative AI on research papers using the Nougat model.

Document topic extraction with Large Language Models and the Latent Dirichlet Allocation (LDA) algorithm.

Using AI to add vector search to Cassandra in six weeks.

Monday, November 13, 2023

Large Language Models and other AI tools in software development (from 4.9. to 22.10.2023)

7 weeks (from 4.9. to 22.10.2023) in the world of Large Language Models and other AI tools used for software development:

List of five free AI Tools for programmers (Amazon CodeWhisperer, ChatGPT, CodeGeeX, GitHub Copilot, Bugasura).

A more detailed comparison of AI tools for programmers - the same tools as above, except that another tool, Replit, is mentioned instead of Bugasura.

And here are 5 ChatGPT alternatives for code generation (Tabnine, Kite, Codota, DeepCode, GitHub Copilot).

Comparing ChatGPT with Bard AI - for software development.

GitHub Copilot Chat in open beta - now available for all individuals in Visual Studio and VS Code.

Couchbase has introduced generative AI capabilities for SQL into its Database as a Service (Couchbase Capella).

MetaGPT - a ChatGPT-powered AI assistant that turns text into ChatGPT-based apps.

AI Assistant for IntelliJ-based IDEs update for October 2023.

Meta open-sources its code-generation LLM, Code Llama.

A new customization capability in Amazon CodeWhisperer generates even better suggestions (Preview).

Chatting with the GM of CodeWhisperer.

How is GenAI different from other code generators?

Is AI enough to increase your productivity?

The future of AI in software development - trends and innovations.

Reimagining application development with AI - a new paradigm.

The pitfalls of using general AI in software development - a case for a human-centric approach.

The challenges of producing quality code when using AI-based generalistic models.

Applying Large Language Models (LLM) to software requirements - creating a knowledge hub of business logic and copilot for faster development.

Chat with the Oracle DB - leveraging OpenAI models to query the Oracle DB, building a Text-to-SQL tool and testing it on a public dataset.

Leveraging GPT models to transform natural language to SQL queries by training GPT to query with few-shot prompting.

Retro-engineering a database schema with Llama 2 - the idea here is to ask the LLM to analyze sample data and provide some insight into what the initial data schema might look like.

‘Talk’ to Your SQL Database Using LangChain and Azure OpenAI.

AI-Driven microservice automation - use ChatGPT to build a MySQL database model, and add API Logic Server to automate the creation of SQLAlchemy model, react-admin UI, and OpenAPI (Swagger).

Tuesday, September 19, 2023

Combining software development principles and patterns with GRASP

As software development has evolved over the years, developers have formulated best practices, principles, and design patterns to create more robust and maintainable systems. In this article, we will explore the differences between software development principles and design patterns, and then dive into the GRASP principles. We will also discuss how GRASP combines principles and patterns, and how it can help us decide what to use.

What is the difference between software development principles and design patterns? They are both essential concepts in software engineering, but they serve different purposes and operate at different levels of abstraction.

Software development principles are essential for creating high-quality software that is efficient, maintainable, and scalable. By following these principles, development teams can reduce costs, speed up development, and create better products that meet user needs. The principles are high-level guidelines or best practices that inform the software development process. They are often broad and language-agnostic, applying to various programming languages and paradigms. Principles promote qualities like maintainability, modularity, efficiency, and simplicity.

You should have a very good reason any time you choose not to follow principles.

Software development design patterns help developers create better software by offering efficient, reusable solutions to common problems that arise during software design. Design patterns lead to improved code quality, easier maintainability, and more effective communication among team members. They also promote scalability and adaptability, and serve as valuable learning tools for developers. They are more concrete and detailed than principles, providing implementation guidelines for specific design challenges. They may be more closely tied to a particular programming paradigm (e.g., object-oriented, functional, etc.).

You should have a very good reason any time you choose to implement a pattern.

One specific set of principles from object design that offers an interesting way to think about connecting principles and patterns is GRASP (General Responsibility Assignment Software Patterns). It was described by Craig Larman in his book Applying UML and Patterns (1997).

It addresses specific development challenges and collects proven programming principles of object-oriented design, rather than being just a set of criteria for creating better software (like SOLID).

It is more a collection of best-practice answers to frequently encountered coding challenges and serves as a guide for making design decisions. It consists of nine principles, each answering specific questions:

Creator

 - Who creates an object or a new instance of a class?

 - Assign the responsibility of creating an object to a class that is closely related to it. 

Related patterns are Factory Method and Abstract Factory. These patterns encapsulate the object creation logic, assigning the responsibility of creating objects to a dedicated factory class.
They promote low coupling and high cohesion by keeping related object-creation logic within a single class.
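As a small sketch of the Creator principle realized through a simple factory (the class and method names here are illustrative, not from any specific library):

```java
// Callers depend only on the Shape interface; the factory is the single
// place that knows which concrete class to instantiate (Creator principle).
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

// The creation logic is encapsulated in one cohesive class.
class ShapeFactory {
    static Shape create(String kind, double size) {
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("Unknown shape: " + kind);
        }
    }
}
```

Adding a new shape then touches only the new class and the factory, leaving all client code untouched.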

Information Expert

 - What responsibilities can be assigned to an object?

 - Assign responsibility to the class that has the information necessary to fulfill it. 

This helps us increase cohesion, promotes encapsulation, and improves maintainability.
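A minimal Information Expert sketch (the Order/LineItem names are hypothetical): the object that holds the data is the one that computes with it.

```java
import java.util.ArrayList;
import java.util.List;

// Each class is the "expert" for its own data: LineItem knows its subtotal,
// Order holds the items, so Order computes the total.
class LineItem {
    private final int quantity;
    private final double unitPrice;
    LineItem(int quantity, double unitPrice) {
        this.quantity = quantity;
        this.unitPrice = unitPrice;
    }
    double subtotal() { return quantity * unitPrice; }
}

class Order {
    private final List<LineItem> items = new ArrayList<>();
    void add(LineItem item) { items.add(item); }
    double total() {
        return items.stream().mapToDouble(LineItem::subtotal).sum();
    }
}
```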

Low Coupling

 - How are objects connected to each other? How to support low dependency, low change impact, and increased reuse?

 - Design classes with minimal dependencies on other classes to promote modularity, improve maintainability, and increase reuse potential.

The Adapter is a design pattern that helps to achieve low coupling. It introduces an adapter class that acts as an intermediary between the incompatible interfaces, reducing the coupling between the classes.
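A minimal Adapter sketch (names are illustrative): client code depends only on a small `Logger` interface, while the adapter translates calls to an incompatible existing class.

```java
// Target interface the client code depends on.
interface Logger {
    void log(String message);
}

// Incompatible existing class we cannot (or don't want to) change.
class LegacyConsoleWriter {
    private final StringBuilder out = new StringBuilder();
    void writeLine(int severity, String text) {
        out.append("[").append(severity).append("] ").append(text).append("\n");
    }
    String contents() { return out.toString(); }
}

// Adapter: acts as an intermediary between the incompatible interfaces,
// so the client and the legacy class stay loosely coupled.
class LegacyLoggerAdapter implements Logger {
    private final LegacyConsoleWriter writer;
    LegacyLoggerAdapter(LegacyConsoleWriter writer) { this.writer = writer; }
    public void log(String message) {
        writer.writeLine(1, message);  // translate the call
    }
}
```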

Controller

 - How are input events delegated from the UI/API layer to the domain layer, including coordinating a system operation? 

 - Assign the responsibility of handling system events to a class that represents the overall system, a subsystem, or a use case. The controller is defined as the first object beyond the UI layer that receives and coordinates a system operation. This principle helps in managing system complexity by separating UI from business logic.

The related principle is Pure Fabrication. The related design patterns are, for example, Command and Facade, or Model-View-Controller (MVC).
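A sketch of the Controller principle (hypothetical names): the UI layer hands the event to a controller, which coordinates the domain objects instead of the UI touching them directly.

```java
// Domain layer: pure business logic, knows nothing about the UI.
class AccountService {
    private double balance = 0;
    void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }
    double balance() { return balance; }
}

// Controller: the first object beyond the UI layer that receives a system
// operation and coordinates it, keeping UI code free of business logic.
class DepositController {
    private final AccountService service;
    DepositController(AccountService service) { this.service = service; }

    String onDepositRequested(String rawAmount) {  // called by the UI/API layer
        try {
            service.deposit(Double.parseDouble(rawAmount));
            return "OK, balance = " + service.balance();
        } catch (IllegalArgumentException e) {
            return "Rejected: " + e.getMessage();
        }
    }
}
```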

High Cohesion

 - How are the operations of elements functionally related? How to keep objects focused, understandable, and manageable?

 - The responsibilities of a given set of elements should be strongly related and highly focused on a rather specific topic. Breaking programs into classes and subsystems, if correctly done, is an example of activities that increase cohesion. Classes with closely related responsibilities are more understandable, maintainable, and robust.


Polymorphism

 - How to handle alternative elements based on type? How to create pluggable software components?

 - Assign the responsibility of defining a common interface to related classes, allowing them to be used interchangeably. This principle supports reusability and flexibility. Polymorphic operations should be used instead of explicit branching based on type.

You can use the Strategy pattern here. It defines a common interface for the varying algorithms, allowing them to be used interchangeably. Polymorphism is achieved by using the common interface for different implementations.
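A minimal Strategy sketch (illustrative names): polymorphic `Discount` implementations replace explicit branching on type.

```java
// Common interface for the varying algorithms (Polymorphism + Strategy).
interface Discount {
    double apply(double price);
}

class NoDiscount implements Discount {
    public double apply(double price) { return price; }
}

class PercentageDiscount implements Discount {
    private final double percent;
    PercentageDiscount(double percent) { this.percent = percent; }
    public double apply(double price) { return price * (1 - percent / 100); }
}

// Client code uses only the interface - no if/else on concrete types.
class Checkout {
    private final Discount discount;
    Checkout(Discount discount) { this.discount = discount; }
    double totalFor(double price) { return discount.apply(price); }
}
```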

Indirection

 - How to avoid a direct coupling between two or more elements and increase reuse potential?

 - Introduce an intermediate class to mediate between other classes, thus reducing coupling and promoting flexibility.

When you want to reduce coupling between a group of classes that communicate with each other, you can apply the Mediator pattern. This pattern introduces a mediator class that acts as an intermediary, managing the communication and relationships between the classes. Another related pattern is, for example, the already mentioned Adapter.
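A compact Mediator sketch (illustrative chat-room names): participants talk only to the mediator, never directly to each other.

```java
import java.util.ArrayList;
import java.util.List;

// Mediator: participants know only the ChatRoom, not each other (Indirection).
class ChatRoom {
    private final List<Participant> participants = new ArrayList<>();
    void join(Participant p) { participants.add(p); }
    void broadcast(Participant sender, String message) {
        for (Participant p : participants) {
            if (p != sender) p.receive(sender.name + ": " + message);
        }
    }
}

class Participant {
    final String name;
    final List<String> inbox = new ArrayList<>();
    private final ChatRoom room;
    Participant(String name, ChatRoom room) {
        this.name = name;
        this.room = room;
        room.join(this);
    }
    void send(String message) { room.broadcast(this, message); }
    void receive(String message) { inbox.add(message); }
}
```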

Pure Fabrication

 - How to achieve high cohesion and low coupling of problem domain elements?

 - Assign the responsibility to an artificial class created just for the purpose of achieving High Cohesion and Low Coupling. Called a 'service' in domain-driven design, such a class does not represent anything from the problem domain but exists solely to keep the design highly cohesive and loosely coupled.
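A small Pure Fabrication sketch (illustrative names): persistence does not belong to the domain class `Invoice`, so it is fabricated into a separate repository class.

```java
import java.util.HashMap;
import java.util.Map;

// Domain class: represents a real problem-domain concept.
class Invoice {
    final String id;
    final double amount;
    Invoice(String id, double amount) { this.id = id; this.amount = amount; }
}

// Pure Fabrication: not a domain concept - an artificial class created
// only to keep persistence logic out of Invoice (high cohesion, low coupling).
class InvoiceRepository {
    private final Map<String, Invoice> store = new HashMap<>();
    void save(Invoice invoice) { store.put(invoice.id, invoice); }
    Invoice findById(String id) { return store.get(id); }
}
```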

Protected Variations

 - How to design objects, subsystems, and systems so that variations in these elements do not impact other elements?

 - Design the system in a way that it is stable in the face of changes by encapsulating variations.

Protected Variations helps us achieve the robustness of our system.

You can use the Bridge pattern to ensure that changes in one class hierarchy don't affect another. This pattern separates an abstraction from its implementation, protecting the variations by encapsulating them within separate class hierarchies.
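A minimal Bridge sketch (illustrative names): the `Notification` abstraction and the `Channel` implementation vary independently, so changes in one hierarchy don't ripple into the other.

```java
// Implementation hierarchy: how a message is delivered.
interface Channel {
    String deliver(String text);
}

class EmailChannel implements Channel {
    public String deliver(String text) { return "email:" + text; }
}

class SmsChannel implements Channel {
    public String deliver(String text) { return "sms:" + text; }
}

// Abstraction hierarchy: what kind of notification is sent.
// The "bridge" is the channel field - each side can evolve independently.
abstract class Notification {
    protected final Channel channel;
    Notification(Channel channel) { this.channel = channel; }
    abstract String send(String text);
}

class UrgentNotification extends Notification {
    UrgentNotification(Channel channel) { super(channel); }
    String send(String text) { return channel.deliver("URGENT " + text); }
}
```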


Understanding software development principles, design patterns, and GRASP principles is crucial for developers to create maintainable, scalable, and robust software systems. Applying GRASP principles helps in making better design decisions and potentially choosing the right design pattern for specific problems. 

For instance, if you need a way to create objects of different types based on input data, consider the Factory pattern (based on the Creator and Polymorphism principles).

By following these guidelines, developers can improve the overall quality of the code.

Tuesday, August 29, 2023

ChatGPT and ASCII art

Some time back, while experimenting with the ChatGPT service, I decided to try how proficient these language models are in dealing with ASCII art - a form of visual art that uses characters from the ASCII character set to create images and designs.
Presented below are the outputs generated by three distinct versions of the ChatGPT model available at that time, all in response to the prompt "write Hello World in ASCII art":

Legacy GPT-3.5:

Default GPT-3.5:


GPT-4:

As you can see, ASCII art presents a unique challenge for language models like ChatGPT. While these models excel at generating human-like text, their inability to effectively comprehend and create ASCII art remains evident.

The inability of ChatGPT models to handle ASCII art is attributed to their design, which is primarily centered around processing and generating text-based data. ASCII art, however, involves a visual and spatial understanding that goes beyond simple language patterns. The models cannot interpret the exact placement, sizing, and arrangement of ASCII characters to create complex visual designs.

The inability to effectively handle ASCII art exemplifies the gap between textual and visual comprehension within these models.

Friday, May 12, 2023

Resurrection II.

He watched as the surrounding structures began to collapse; the canteen started crumbling with them and finally disappeared into a pile of rubble. Flames engulfed an entire quarter of the base, and a multitude of drones rose into the sky, moving in formations resembling startled flocks of birds.

The base was shaped like a cross, with a spaceport at its center. Each arm of the cross consisted of two long platforms with cranes and other equipment. On closer inspection, the distinct shape dissolved, and the whole scene resembled an enormous anthill teeming with countless small drones and irregular functional structures.

One side of the base was severely damaged; the flames mingled with the deep orange glow of the star, scattered through the atmosphere. The star, which appeared twice the size of the Sun in the sky, hung low, just above the horizon, its shape distorted by refraction.

That was all he could see on the recording. His last memory before the event was of falling asleep in his room - between the event and that memory there was a nineteen-hour gap. Other video recordings were available, but they showed him only walking through the corridors. He decided to review them later.

"You woke up after sixty-eight hours, once we had managed to contain the consequences of the explosion and secure sufficient resources. The attack on the base was one of several simultaneous attacks in the system. Other incidents occurred on the second planet and at the mining facilities in the outer regions. We have lost almost all connection to the hypernet; only two small data portals remain. Further connections will arrive over the next eight months via faster-than-light ships from the nearest strongpoint," the voice said, adding: "That is all I can tell you for now."

The voice had been his only source of information since he woke. When it fell silent, a terrifying silence and emptiness surrounded him. There was nothing to see - only the void and his thoughts.

Monday, May 8, 2023

Resurrection I.

On the dark blue surface of the ocean, small fragments of ice were tossed about by large waves. From the observer's distance, these waves seemed rather insignificant. The frigid exterior was lit by faint orange rays of light coming from the covered horizon, hidden just behind a tower structure.

The sky was a mixture of dark red and blue, almost free of clouds except for a few delicate cirrus formations. The cold weather was, fortunately, separated from the cozy canteen by a transparent barrier. At this hour the canteen was bustling, as people from the surrounding laboratories gathered around a long buffet table holding two long portions of in vitro meat - fish and beef.

Korven, however, was focused mainly on the scenery outside, gazing at the sky and clasping a mug of warm tea with both hands. He was lost in the music playing in his ears, undistracted by any news from home, simply resting for a while. He had just finished eating and kept watching the view, framed by a construction site with numerous mechanical arms and cranes on both sides. Directly below the section housing the canteen, a lone trimaran drone lay anchored.

And then...

Friday, May 5, 2023

Worm Nest I.

Dimly lit ice crystals, barely visible, lying scattered on the ground, began to move as something stirred beneath the surface. The shifting soil revealed an elongated, many-legged insectoid creature resembling a colorless spiny worm - an unnamed species, represented only by a genomic record in the catalog. The creature had awoken from its lethargic underground energy-gathering, roused by warm gusts of wind. The wind came from the ever-present crimson horizon, occasionally veiled by clouds.

The creature began to breathe deeply the fresh air of the atmosphere, an atmosphere low in oxygen. The fresh air filled its body through many noses distributed across its whole body. As it stood there, two pairs of wings, rather small compared to its body, began to emerge on its back.

A slightly brighter red dot stood out in the sky - a red dwarf, the twin of the planet's home star and part of this binary system. Although currently alone, it was occasionally accompanied in the sky by planets wandering the outer parts of the system. The creature saw this star more clearly because its vision is shifted further into the infrared spectrum, where red dwarfs emit most of their electromagnetic energy. This type of vision is more useful here, helping in the search for food and shelter, typically places radiating heat from underground, where most animals dwell in the planet's colder regions, close to the terminator.

Males instinctively move farther from the terminator, toward the colder parts of the planet, to find a mate there. The females, larger and hardier, live in the coldest habitable regions, so as to deter predators and force the males into a lifelong effort that thus leads to natural selection of the strongest individuals. In the vast frozen wasteland they recognize each other by infrared flashing, partly visible in visible light as well. The male is eventually consumed - but the same fate awaits the females. They return closer to the terminator, where they ultimately serve as food for their offspring. In the best cases, at the end of their lives, they sacrifice themselves to a predator or scavenger and transfer their young as parasites onto a new host.

The journey of this particular male, however, was abruptly ended. The foot of a remotely controlled avatar crushed him, removing him from the planet's gene pool. But the avatar continued on its swift journey, heading deeper into the darkness of the planet's far side.

Friday, March 31, 2023

Digital Transformation (13. - 19.3.2023)

Two webinars:


MLOps tips and tricks - 75 code snippets

Using MLflow with ATOM to track machine learning experiments


Are expert systems dead? - a review of recent trends, use cases and technologies


ChatGPT - understanding what it can do and what it cannot do

Data as the Achilles' heel of AI - data silos and dark data hindering AI's effectiveness

Monday, March 27, 2023

IT links (13. - 19.3.2023)

Custom constructors in Java Records give us greater control during object initialization by allowing for data validation and error handling.
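As an illustration of that first point, a small record with a compact constructor doing validation (the `Range` name is just an example):

```java
// A compact constructor in a Java record validates data at creation time,
// before the implicit field assignments run.
record Range(int low, int high) {
    Range {
        if (low > high) {
            throw new IllegalArgumentException("low must not exceed high");
        }
    }
    int length() { return high - low; }
}
```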

Using Java Records as DTOs in Spring Boot 3 applications with Hibernate 6.

Reminding basics:

9 outdated ideas about Java

What do we know about JDK 20 and JDK 21 now? 

A new family of interfaces representing collections with well-defined sequences or orderings

Spring Data 3 introduces List-based CRUD repository interfaces as a replacement for the existing Iterable-based interfaces. 

Maven Reactor and a multi-module Maven project with inter-module dependencies.

How can a Java heap dump be obtained from an application running on a Kubernetes pod?

Comparing fluent interface design pattern with builder pattern.

RESTful Architecture cheatsheet

Adopting Contract-Driven Development

Troubleshooting and operating AWS resources from Microsoft Teams with AWS Chatbot.

Sunday, March 5, 2023

When is too little code too much?

Let's start with one very nice quotation from the book 'Java Performance: The Definitive Guide: Getting the Most Out of Your Code':

... but the conflict here is that a small well-written program will run faster than a large well-written program. This is true in general of all computer programs, and it applies specifically to Java programs. The more code that has to be compiled, the longer it will take until that code runs quickly. The more objects that have to be allocated and discarded, the more work the garbage collector has to do. The more objects that are allocated and retained, the longer a GC cycle will take. The more classes that have to be loaded from disk into JVM, the longer it will take for a program to start. The more code that is executed, the less likely that it will fit in the hardware caches on the machine. And the more code that has to be executed, the longer it will take.

This summarizes a lot, and I think the cases mentioned need no further explanation.

Generally, in programming, there are principles like KISS (Keep It Simple, Stupid), DRY (Don't Repeat Yourself), YAGNI (You Aren't Gonna Need It), and YDNIY (You Don't Need It Yet) that try to teach us to write as little code as possible.

You could even say that code is our enemy - quoting:

Code is bad. It rots. It requires periodic maintenance. It has bugs that need to be found. New features mean old code has to be adapted.

The more code you have, the more places there are for bugs to hide. The longer checkouts or compiles take. The longer it takes a new employee to make sense of your system. If you have to refactor there's more stuff to move around.

... and more.

And one quote, which we can use here also is (unfortunately, I don't have a source or author):

A perfect system is one that does not exist but still fulfills its function.


Benefits of writing less code

Less code is easier to understand, and it decreases cognitive load - the mental effort required to understand the code. When there is less code, it is easier to see the overall structure and flow of the program, and it is easier to understand how each part of the code is related to the rest. This makes it easier for others (and yourself) to work with and modify the code.

Less code is often more efficient. By writing less code, you can avoid unnecessary computations and reduce the number of function calls.

Less code is easier to maintain and debug, which makes it easier to find and fix problems when they do arise. Each new line of code needs to be verified to work properly, so the lines that don't exist need no tests and bring no new bugs.

But this does not mean you should write as little code as possible at all costs.


When to write more code

If you blindly apply principles like YAGNI, you can hurt yourself later on. There are, of course, good reasons to "waste" more lines of code: for example, better abstractions or more interfaces, even if they might not look needed right now; things that decrease coupling; things that are self-explanatory and improve code organization.

It is good to think in terms of smaller chunks of code and to split the application into them (no matter whether we call them functions or methods). This allows us to organize the code in a better way.

Smaller pieces of code have simpler logic, which makes code analysis (as provided by current IDEs) easier - for example, detecting duplicated code. We then don't need to make the same change in multiple places, which means fewer bugs, and we can use more abstract, reusable patterns.

Shorter pieces of code also help us avoid merge conflicts caused by parallel work on a complex method changed by multiple developers at the same time.

Code splitting helps us with a clearer separation of concerns and with applying the Single Responsibility Principle (SRP).

Shorter pieces of code are typically easier to test, as they have fewer code paths and are less complex.

Shorter functions/methods usually have a lower number of parameters, making testing easier. However, if a function/method has too many parameters, it may be a sign that the method is trying to do too much and should be broken up into smaller methods.


Code splitting

Code splitting can follow the rule of ten, which suggests that methods or functions should ideally have no more than ten lines of code. Additionally, one class should not have more than ten functions/methods, and one package should have no more than ten classes. While this is not a hard and fast rule, it promotes more readable and maintainable code as a general guideline.

Another approach is to consider the size of the IDE window - more exactly, the part where you are editing your code. A function/method should fit within the visible part of the window, allowing you to see its entire definition without scrolling.
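As a tiny sketch of this kind of splitting (the names are illustrative), one would-be long method broken into three short, single-purpose ones, each easily fitting the rule of ten:

```java
import java.util.List;

// Instead of one long method that filters, sums, and averages in a single
// body, each step is a short, separately testable method with a descriptive name.
class ReportBuilder {
    double averagePositive(List<Double> values) {
        List<Double> positives = keepPositives(values);
        return positives.isEmpty() ? 0.0 : sum(positives) / positives.size();
    }

    private List<Double> keepPositives(List<Double> values) {
        return values.stream().filter(v -> v > 0).toList();
    }

    private double sum(List<Double> values) {
        return values.stream().mapToDouble(Double::doubleValue).sum();
    }
}
```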


Grouping chunks of code

Grouping chunks of code into sets that are more focused on one specific concern can help fulfill the Single Responsibility Principle (SRP).  

Proper naming conventions, such as using names containing well-known design patterns (so that the codebase isn't just full of various "managers"), can make it easier to understand what the code is doing without in-depth investigation.

These sets (files, classes) of functions/methods can contain more concrete or more abstract functionality and structures. These groups of classes can, in turn, be grouped into modules. Separating different levels of abstraction into separate, independently deployable modules can help us build a more maintainable application, where changes are isolated to specific parts or modules. We then don't need to be so afraid that we broke something in a different part of the application, which makes testing easier and more focused, since we don't need to retest absolutely everything.

In object-oriented programming (OOP), more stable and abstract modules can use inheritance, since they should cover core business concepts that are not going to change so often.

While more volatile, concrete modules should rather use object composition to allow for more flexibility in adjusting functionality to business needs.
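A brief sketch of that contrast (illustrative names): a stable abstract base for the core concept, with composition plugging in the volatile detail.

```java
// Stable, abstract core concept: unlikely to change, safe for inheritance.
abstract class PaymentProcessor {
    final String process(double amount) {
        return "processed " + amount + " via " + methodName();
    }
    abstract String methodName();
}

// Volatile detail supplied by composition: easy to swap without subclassing.
interface FeePolicy {
    double feeFor(double amount);
}

class CardProcessor extends PaymentProcessor {
    private final FeePolicy feePolicy;  // composition for the changing part
    CardProcessor(FeePolicy feePolicy) { this.feePolicy = feePolicy; }
    String methodName() { return "card"; }
    double totalWithFee(double amount) { return amount + feePolicy.feeFor(amount); }
}
```

Swapping the fee calculation is then just passing a different `FeePolicy`, with no new subclass needed.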

Generally speaking: write as little code as possible, but at the same time maximize the long-term positive impact on the business by prioritizing good architecture that facilitates easier maintenance and extension.