Frequently Asked Questions
Answers to questions we receive often. If your question is not here, email dawid@kuliberda.ai.
What is an AI agent?
An AI agent is a program that receives a goal, decides what steps to take, executes those steps using available tools (search, file access, APIs), and returns a result without requiring a human to direct each individual action.
An assistant answers your question. An agent completes your task.
Most of what we build sits between the two: systems that handle a defined workflow end-to-end, with a human reviewing the output rather than supervising each step.
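To make the loop concrete, here is a minimal sketch in Python: goal in, steps executed with tools, result out. The tool names and the fixed plan are illustrative; a real agent asks a language model to choose each step.

```python
# Minimal agent-loop sketch. A real agent would ask a model to pick each
# step; here the plan is fixed so the loop itself is visible.

def search(query):
    # Stand-in for a real search tool
    return f"results for {query!r}"

def write_file(name, text):
    # Stand-in for a real file-writing tool
    return f"wrote {len(text)} chars to {name}"

TOOLS = {"search": search, "write_file": write_file}

def run_agent(goal, plan):
    """Execute a plan of (tool, args) steps toward the goal, collecting results."""
    results = []
    for tool_name, args in plan:
        results.append(TOOLS[tool_name](*args))
    return results

steps = [("search", ("return policy",)),
         ("write_file", ("summary.txt", "Policy summary ..."))]
print(run_agent("summarize the return policy", steps))
```

The point of the sketch is the shape: the human supplies the goal and reviews the output; the steps in between run without supervision.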
What is memory in an AI system?
Memory is how an AI system retains information across sessions.
Without memory, every conversation starts from zero. The system has no context about previous interactions, your preferences, or your history. It is like talking to someone with complete amnesia every time you meet.
With memory, the system can reference past conversations, learn your preferences over time, and build context incrementally. It remembers that you prefer short reports, that Client X always has special requirements, that the policy changed last month.
We implement two types:
- Session memory: Context within a single conversation. The system remembers what was said earlier in the current session.
- Persistent memory: Structured storage that survives across sessions. The system remembers things from last week, last month, or last year.
Persistent memory requires explicit design decisions about what is stored, how it is indexed, and who can access it. We cover this in detail during the specification phase. For a deeper look at how reasoning and memory work in AI systems, see Reasoning and Memory.
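As a minimal illustration, persistent memory can be as simple as a small database table that survives restarts. The schema and the example key below are illustrative, not our production design.

```python
import sqlite3

# Persistent-memory sketch: a key-value table. With a real file path instead
# of ":memory:", the stored preferences survive across sessions.

def open_memory(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)")
    return conn

def remember(conn, key, value):
    conn.execute("INSERT OR REPLACE INTO memory VALUES (?, ?)", (key, value))
    conn.commit()

def recall(conn, key):
    row = conn.execute("SELECT value FROM memory WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

conn = open_memory()
remember(conn, "report_style", "short")
print(recall(conn, "report_style"))  # -> short
```

The design decisions mentioned above live exactly here: what keys exist, how values are indexed, and who is allowed to call `recall`.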
How does an AI assistant differ from ChatGPT?
ChatGPT is a general-purpose AI. It knows a lot about a lot of things, but nothing specific about your business, your clients, your documents, or your tone.
A custom AI assistant is built on your specific context: your products, your pricing, how you communicate, what your policies are, and what to do when edge cases arise.
Here is a practical example. You ask ChatGPT: "What is our return policy for enterprise clients?" ChatGPT does not know your return policy. It will give you a generic answer about return policies in general, or it will make one up that sounds plausible.
Your custom assistant, built on your actual policy documents, gives you: "Enterprise clients on the Plus plan have 60-day returns with full refund. Standard enterprise clients have 30-day returns. Both require the original order reference. Exceptions for defective products are handled under section 4.2 of the enterprise agreement."
ChatGPT gives you a reasonable answer. A custom assistant gives you the correct answer for your situation, in your voice, following your rules.
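A rough sketch of how that grounding works: the assistant's prompt carries your actual policy text, so the model answers from your documents instead of general knowledge. The document store and prompt format below are illustrative.

```python
# Grounding sketch: relevant policy text is placed in the prompt so the
# model answers from your documents. POLICY_DOCS stands in for a real
# document store.

POLICY_DOCS = {
    "returns_enterprise": "Plus plan: 60-day returns, full refund. "
                          "Standard: 30-day returns. Original order reference required.",
}

def build_prompt(question, docs):
    context = "\n".join(docs.values())
    return f"Answer using only this policy text:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is our return policy for enterprise clients?", POLICY_DOCS)
print(prompt)
```

A generic model without that context can only guess; with it, the answer comes from your policy, in your terms.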
What does "working system" mean?
A system that handles the target workflow end-to-end on real inputs without requiring manual intervention at each step.
This means:
- It works on your actual documents, not demo data
- It handles the edge cases we identified in the spec
- It produces output that meets the acceptance criteria we agreed on before build began
- Your team can operate it without contacting us for routine use
It does not mean:
- Zero errors (we document known limitations)
- Perfect performance on inputs outside the spec boundary
- Self-maintaining or self-improving without human oversight
What do I need to prepare before we start?
Before Discovery:
- A clear description of the process you want to automate or improve — written down, not just "we will talk about it"
- Access to 5-10 real examples (actual emails, documents, reports — not cleaned-up versions). The messier and more representative, the better.
- A list of tools you currently use for this process (email client, CRM, storage, project management, etc.)
- 60 minutes of uninterrupted time. No multitasking during the session.
Before Build:
- Signed specification
- Access credentials to the relevant systems (we will tell you exactly what we need)
- A designated point of contact who can test and give feedback within 48 hours
The most common mistake: showing up to Discovery with a vague idea instead of specific examples. "We want to automate email" is not enough. "Here are 50 emails from the last month, here is how we currently classify them, and here is where it breaks down" — that is what we need.
How long does a project take?
Depends on the tier:
- AI Guide (Tier 1): One session (90 minutes) plus written assessment within 48 hours. Total: under a week.
- AI Assistant (Tier 2): 2-4 weeks from signed spec.
- Process Automation (Tier 3): 4-8 weeks from signed spec.
- AI System (Tier 4): 8-16 weeks from signed spec.
These timelines include our build time, your testing time, and iteration cycles. They start when the spec is signed and initial access is provided.
If we are waiting on your side for more than 5 business days, the timeline shifts accordingly. Fast feedback keeps projects on track. Delayed feedback delays delivery.
What happens after the project is delivered?
You receive a handoff package with full documentation and a support period for bugs:
- Tier 2: 30 days of bug support
- Tier 3: 30 days of bug support
- Tier 4: 60 days of bug support
After the support period, the system is yours to operate. Your team has the documentation, the training, and the code.
For clients who want ongoing support beyond the initial period, we offer a Maintenance Retainer (2,000-4,000 PLN/month). This covers prompt updates, model upgrades, integration changes, performance monitoring, and priority support. Month-to-month, cancel anytime with 30 days' notice. Details on the services page.
Can the AI make mistakes?
Yes. Every AI system makes mistakes. This is not a flaw in our implementation; it is a fundamental property of how language models work. LLMs hallucinate: they generate text that sounds correct but is factually wrong. Output quality also varies, much as people have good days and bad days, except that with AI the variation comes from context and data quality rather than mood.
What matters is not whether the AI makes mistakes, but whether those mistakes are caught before they cause damage.
Our workflow is built around this reality: multi-step verification, fact-checking against source material, confidence scoring (the system flags when it is uncertain), and human review on every high-stakes output path. The client always gets premium quality — not because the AI is perfect, but because our process catches imperfections before they reach you.
We do not build systems where the AI has unreviewed authority over consequential actions: financial transactions, client-facing communications without approval, or data deletion. We document known failure modes in the handoff package and test against edge cases, not just the easy path.
For a detailed explanation of how hallucination works and our verification process, see Language Models: How They Work.
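The confidence-scoring part of that workflow can be sketched in a few lines: outputs the system is unsure about are routed to human review instead of being sent onward. The threshold and the scored examples below are illustrative.

```python
# Confidence-routing sketch: uncertain outputs go to a human reviewer
# instead of proceeding automatically. The 0.8 threshold is illustrative.

REVIEW_THRESHOLD = 0.8

def route(output, confidence):
    """Return 'auto' for confident outputs, 'review' for uncertain ones."""
    return "auto" if confidence >= REVIEW_THRESHOLD else "review"

drafts = [("Refund approved per section 4.2", 0.95),
          ("Policy unclear for this edge case", 0.42)]
for text, score in drafts:
    print(route(text, score), "-", text)
```

In practice the threshold is set per output path during specification, stricter on high-stakes paths.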
Do you sign NDAs?
Yes, with one limitation: we do not sign NDAs that prohibit us from describing the general category of work we performed (e.g. "AI assistant for internal knowledge management"). We protect your specific documents, data, and business logic. We cannot agree to hide the existence of the type of system we built.
Do you work with clients outside Poland?
Yes. We work in Polish and English. All documentation, communication, and deliverables can be in either language. Location is not a constraint — the entire process works remotely.
What if I am not technical?
That is the default. Most of our clients are not technical, and the process is designed for that.
We do not expect you to understand how the system works technically. We expect you to know what your workflow looks like today and what a successful outcome looks like. You are the domain expert — you know your business. We are the technical expert — we know how to build the system.
The specification phase translates your non-technical descriptions into technical requirements. If something requires technical knowledge from your side, we will tell you in advance and help you prepare.
During handoff, we train your team to operate the system. The documentation is written in plain language, not technical jargon. If your team can follow a step-by-step guide, they can maintain the system.
What is the smallest project you take on?
The AI Guide (Tier 1, 149 PLN). It is a structured consultation with a written output, not a build project.
For builds, the minimum is Tier 2 (AI Assistant, from 1,500 PLN).
Can I start small and expand later?
Yes. Many clients start with a single assistant (Tier 2), test it on real work, and expand into automation or a full system after they see how it performs.
This is actually our recommended path when you are new to AI in your business. Start small, prove the value, then scale. A successful Tier 2 project gives you confidence in the technology, a working relationship with us, and real data on what works.
We design Tier 2 builds with extension in mind when the client indicates that expansion is likely. We note this in the spec. That way, the assistant can become part of a larger pipeline without being rebuilt from scratch.
Do I need to understand programming?
No. You need to understand your workflow and your goals. We handle the technical side entirely.
What you do need:
- Access to your tools. We will tell you exactly what credentials or permissions we need, and we will walk you through providing them if needed.
- Availability for feedback. During the build phase, you will need to test the system on real work and tell us what is working and what is not. This takes a few hours per week, not full days.
- Willingness to answer questions. We will ask about your processes, your edge cases, and your preferences. Quick, specific questions — not technical ones.
What you do not need:
- Programming knowledge
- Understanding of AI models or APIs
- Technical vocabulary
- A "technical person" on your team (though it does not hurt)
We have built systems for solo entrepreneurs who describe their processes in plain language. If you can explain what you do and what you want to achieve, we can build it.
What happens when the AI model gets updated?
Your system improves.
We design all systems to be model-upgradeable. When Anthropic releases a better version of Claude or OpenAI releases an improved GPT, your assistant gets smarter without being rebuilt from scratch.
What a model upgrade looks like in practice: we swap the underlying model, run the test suite against your acceptance criteria, verify that everything works as expected (or better), and deploy the update. Your system prompts, knowledge base, and integrations stay the same. Only the "brain" underneath gets upgraded.
If you are on a maintenance retainer, model upgrades are included. We handle the upgrade, test it, and deploy it. You get a notification saying "your system now uses Claude Opus 5" (or whatever the next version is called) and you carry on with your day.
If you are not on a retainer, we can perform the upgrade as a one-time service. We will quote the cost upfront — it is typically a few hours of work.
Can I switch the AI model later?
In most cases, yes.
We architect systems to be as model-agnostic as possible. The knowledge base, configuration, integration layer, and business logic do not change when you switch models. Only the AI model underneath changes.
For standard builds (most Tier 2 and Tier 3 projects), switching from Claude to GPT or vice versa is straightforward. The system prompts may need adjustment — different models respond differently to the same instructions — but the core system stays intact.
For deep integrations that use model-specific features (certain Tier 4 projects), switching may require more work. We will tell you during specification if the project has model-specific dependencies, and we will explain the trade-offs of going model-agnostic versus model-specific.
The short answer: we never lock you into a single AI provider. If a better model comes along, you should be able to benefit from it.
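A minimal sketch of what model-agnostic means in code: business logic talks to one interface, and the concrete provider is a configuration choice. The backend classes below are stubs standing in for real API clients.

```python
# Model-agnostic layer sketch: everything above this layer (prompts,
# knowledge base, integrations) calls complete() and never cares which
# provider is underneath. The backends here are stubs, not real clients.

class ClaudeBackend:
    def complete(self, prompt):
        return f"[claude] {prompt}"

class GPTBackend:
    def complete(self, prompt):
        return f"[gpt] {prompt}"

BACKENDS = {"claude": ClaudeBackend, "gpt": GPTBackend}

def make_model(name):
    # Switching providers is a one-line configuration change
    return BACKENDS[name]()

model = make_model("claude")
print(model.complete("Summarize this document"))
```

With this shape, a model switch is a config change plus a round of prompt adjustment and testing, not a rebuild.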
How is my data protected?
Data protection is built into every project from day one, not bolted on at the end.
Encryption. All data is encrypted at rest and in transit. Standard TLS for data in transit, AES-256 for data at rest. No exceptions.
Credential management. API keys, passwords, and access tokens are stored in secure credential storage, never in source code. Your GitHub repository contains configuration files — but credentials are injected at runtime from secure storage, not hardcoded in the files.
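A minimal sketch of what runtime injection looks like: the code reads the secret from its environment rather than from a file in the repository. The variable name API_KEY is illustrative.

```python
import os

# Credential-injection sketch: the secret is read from the environment at
# runtime, so it never appears in source code or the repository.

def get_api_key():
    key = os.environ.get("API_KEY")
    if key is None:
        raise RuntimeError("API_KEY not set; inject it from secure storage")
    return key

# In production this variable is set by the deploy environment, not by code
os.environ["API_KEY"] = "example-secret"
print(get_api_key()[:7])  # -> example
```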
Access control. We access only the systems and data needed for the project, with the minimum permissions required. When the project is done, our access is revoked.
GDPR compliance. All projects are designed for GDPR compliance from the start. See the GDPR question below for details.
Post-project data. We do not store your data after project completion unless you specifically request ongoing maintenance. Project files on our side are deleted within 30 days of handoff. Your data lives on your infrastructure, under your control.
Full details in our Privacy Policy.
What does "human in the loop" mean?
For every decision with real consequences — financial, customer-facing, data deletion, legal — a human reviews before the AI acts.
The AI prepares. It analyzes the data, drafts the response, generates the recommendation, classifies the document. Then it stops and waits for a human to approve, modify, or reject.
For routine, low-risk tasks (sorting emails into categories, extracting data from standard documents, generating internal summaries), the AI acts autonomously. These are tasks where a mistake is easily caught and easily corrected, with no significant consequences.
The boundary between "autonomous" and "human-reviewed" is defined during specification. You decide where the line is, based on your risk tolerance. We will recommend based on experience, but the final call is yours.
This is how we handle the reality that AI makes mistakes. Not by pretending it does not, but by designing the system so that mistakes on high-stakes paths are always caught by a human before they cause damage.
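The gate itself can be sketched in a few lines: some action types always stop and wait for approval. The action names and the set of what counts as high-stakes below are illustrative; in a real project that set comes from your spec.

```python
# Human-in-the-loop gate sketch: high-stakes actions wait for explicit
# approval; low-risk actions run autonomously. HIGH_STAKES is illustrative.

HIGH_STAKES = {"refund", "delete_data", "send_to_client"}

def execute(action, payload, approved=False):
    if action in HIGH_STAKES and not approved:
        return ("pending_review", payload)   # stops and waits for a human
    return ("done", payload)                 # low-risk: acts autonomously

print(execute("sort_email", "invoice.eml"))            # ('done', 'invoice.eml')
print(execute("refund", "order 1042"))                 # ('pending_review', 'order 1042')
print(execute("refund", "order 1042", approved=True))  # ('done', 'order 1042')
```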
Can I see the code?
Yes. Every project is delivered as a GitHub repository with full access. You own the code.
What you get:
- Complete source code with full change history (every commit, every modification, from day one)
- All configuration files (system prompts, model settings, integration configurations)
- Documentation (README, architecture overview, operations manual)
- Test suite (the tests we used to verify the system works correctly)
You are never locked in. If you want to hire another developer to modify the system, you can. If you want to switch to a different provider for maintenance, you can. If you want to fork the project and build something new on top of it, you can.
The code is yours. Not licensed to you, not hosted on our infrastructure with restricted access — yours. Fully.
What about GDPR?
We design all systems for GDPR compliance from the start. This is not an afterthought or an add-on.
During specification: Personal data processing is documented explicitly. What data is collected, why, how it is processed, where it is stored, who has access, and how long it is retained. If the system processes personal data, the spec includes a data processing section.
Model training: We do not use your data for model training. Your documents, emails, and business data are used solely for the purpose of your project. They do not become part of anyone's training dataset.
Data retention: Policies are defined per project during specification. How long data is kept, when it is deleted, and how deletion is verified. Your customers' data stays under your control at all times.
Data processing agreement: For projects that involve processing personal data, we provide a standard DPA (Data Processing Agreement) as part of the project documentation.
Right to erasure: Systems that handle personal data include the ability to delete specific records on request, as required by GDPR Article 17.
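A minimal sketch of what erasure support looks like: delete every record for one data subject, then verify nothing remains. The single-table schema below is illustrative; a real system erases across every store that holds the subject's data.

```python
import sqlite3

# Right-to-erasure sketch: remove all records for one subject and verify
# the deletion. One table here; real systems cover every store.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (subject_id TEXT, data TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [("u1", "email"), ("u1", "order"), ("u2", "email")])

def erase_subject(conn, subject_id):
    conn.execute("DELETE FROM records WHERE subject_id = ?", (subject_id,))
    conn.commit()
    remaining = conn.execute("SELECT COUNT(*) FROM records WHERE subject_id = ?",
                             (subject_id,)).fetchone()[0]
    return remaining == 0  # verified deletion

print(erase_subject(conn, "u1"))  # -> True
```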
Full details in our Privacy Policy.
How do you handle confidential business information?
All client information shared during Discovery, Specification, and Build is treated as confidential by default.
We do not share your business processes, documents, or data with other clients. We do not use your project as a case study without explicit written permission. We do not discuss the specifics of your business with anyone outside the project team.
For clients who require formal confidentiality agreements, we sign NDAs (see the NDA question above for the one limitation).
What if the project does not work?
We have a process designed to prevent this, but let us address it directly.
Before build starts: Discovery identifies whether AI is the right tool. Specification defines exactly what we are building. Both phases exist to prevent building something that will not work. If we have doubts about feasibility during Discovery, we say so.
During build: We test on real data, iterate based on your feedback, and measure against acceptance criteria. Problems are caught and fixed during build, not discovered after delivery.
If acceptance criteria are not met: We keep working until they are. The spec defines what "done" looks like, and we are both bound by it. We do not deliver a system that does not meet its own spec.
If we discover mid-build that the approach will not work: We tell you immediately. We do not hide problems or hope they will resolve themselves. We propose alternatives, adjust the spec, or in rare cases, recommend stopping the project. In that case, you are not charged for work that does not deliver value.
We have never delivered a project that did not meet its specification. The reason is simple: we do not start projects that we do not believe we can deliver. That is what Discovery is for.