Kuliberda Labs
kuliberda.ai

The Consultation: Why It Matters

We do not start with a quote. We start with a conversation.

Sending a price before understanding your workflow is like a doctor writing a prescription before the examination. It might land on the right answer by accident, but you would not bet your health on it. And you should not bet your business on it either.

The consultation exists because AI projects fail when they start from assumptions. Someone reads about what AI can do, imagines it applied to their business, and asks for a quote. The vendor, eager to close the deal, sends a number. Both sides are guessing. Three months later, the system does not match reality, and everyone is frustrated.

We skip that entire failure mode by starting with Discovery.

There is another reason we start here. AI is not a universal solution. Some processes benefit enormously from AI. Others do not benefit at all. And a surprising number of processes that seem like perfect candidates turn out to have complications that make AI impractical — at least until certain prerequisites are met.

The only way to know which category your process falls into is to examine it properly. Not through a sales call, not through a questionnaire, not through a demo of what our tools can do. Through a focused, structured session where we look at your actual work and give you an honest answer.

That is what Discovery is.


What Happens During Discovery

Discovery is a structured process, not a casual chat. It has a defined format, defined goals, and a defined output. Here is exactly what happens.

Session 1: Mapping Your Current Situation (60 minutes)

This is where we build a real picture of how your business operates day-to-day. Not the org chart, not the mission statement — the actual work that actual people do with actual tools.

We cover five areas:

What do you do manually every week — task by task, not "a lot of things."

We need specifics. "We process client inquiries" is not enough. "Maria spends 3 hours every Monday morning reading emails from the contact form, categorizing them into sales leads, support requests, and spam, then forwarding each to the right person with a short summary" — that is what we need.

The reason for this level of detail is simple: AI is good at specific, repeatable tasks. It is bad at vague, undefined responsibilities. The more precisely you can describe the task, the more accurately we can tell you whether AI can handle it.

Do not worry about organizing this perfectly before the session. We will guide you through it. But the more concrete you can be, the more useful our assessment will be. A list of tasks with approximate time per task is gold. A vague feeling that "things are inefficient" gives us very little to work with.

Where do you lose time — specific minutes and hours, not "too much."

"We waste time on admin" tells us nothing. "Our sales team spends roughly 6 hours per week copying data from email conversations into the CRM, then another 2 hours writing follow-up emails based on that data" tells us everything.

We are looking for time sinks that have a pattern. If the same type of work happens every day or every week, and it follows a similar structure each time, that is a candidate for AI. If it is different every time and requires judgment that changes based on context — it might not be.

Here is a useful exercise before the session: pick one week and actually track your time on the process you want to improve. Write down every step, every decision, every handoff. You will be surprised at how much time goes into things you have been doing on autopilot. That time log becomes our map.

What tools do you use today.

We need to know your stack. Email provider, CRM, document storage, communication tools, spreadsheets, databases, project management — all of it. Not because we will integrate with everything, but because integration complexity is one of the biggest cost drivers in AI projects. Knowing your tools upfront lets us estimate accurately.

Some tools have excellent APIs that make integration straightforward. Others are locked down, poorly documented, or require workarounds. We would rather discover this in a 60-minute session than in week 3 of a build.

A common surprise: many popular business tools offer API access only on their higher pricing plans. If you are on a basic plan and the integration requires API access, that is an additional cost on your side that we need to account for in the estimate. Better to know this during Discovery than during build.

What have you tried before and why it did not work.

This question saves more time than any other. If you have already tried a Zapier automation and it broke because of edge cases in your data, we need to know that. If you hired a freelancer to build a chatbot and it could not handle questions outside its training data, we need to know that too.

Previous failures are not embarrassing — they are data. They tell us exactly where the hard parts are. And they prevent us from repeating the same mistakes.

If you have not tried anything before, that is fine too. But if you have — whether it was a tool, a freelancer, an internal experiment, or even a failed attempt to build something with ChatGPT — tell us about it. What worked, what did not, and what you learned.

What does success look like — in concrete, measurable terms.

"We want things to be more efficient" is not a success criterion. "We want the time spent on email classification to drop from 6 hours per week to under 1 hour, with accuracy above 90%" — that is a success criterion.

Defining success in measurable terms protects both sides. You know exactly what to expect. We know exactly what to build toward. And when the project is done, there is no argument about whether it worked — we check it against the agreed criteria.

If you are not sure how to define success in measurable terms, that is okay. We will help you during the session. But come in thinking about it. "Better" is not measurable. "Faster" is not measurable. "From 6 hours per week to 1 hour" is measurable. "From 80% accuracy to 95%" is measurable. "Response time from 48 hours to under 4 hours" is measurable.

Session 2 (Optional): Deep Dive Into a Selected Process

If Session 1 identifies a strong candidate for AI, we sometimes schedule a second session focused entirely on that process. Not every Discovery needs Session 2 — sometimes Session 1 gives us everything we need. But for complex processes with many edge cases, the second session is where the real understanding happens.

Analysis of real examples. We look at actual documents — real emails, real reports, real data. Not cleaned-up samples, not hypothetical scenarios. The messy version. We want to see the email with the badly formatted attachment, the invoice that is missing half the fields, the client request that does not fit any of your existing templates.

Why? Because that is what the AI will encounter in production. If we build the system based on your best examples, it will work perfectly on your best examples and fail on everything else. The messy, inconsistent, occasionally incomprehensible inputs are where the real work happens.

Identification of edge cases. Every process has a "normal path" and a collection of exceptions that make the normal path complicated. The client who replies in a different language. The invoice that is formatted differently from all the others. The request that does not fit any existing category. The email that is technically a complaint but reads like a compliment. The document that arrives as a photo of a printout of a scan of a PDF.

These edge cases are where simple solutions break and where good solutions prove their value. During Session 2, we map as many of them as possible. We ask you: "What is the weirdest thing you have had to handle in this process?" and "What makes your team say 'oh, this one is going to be tricky'?"

The edge cases we identify in Session 2 directly shape the specification. They determine how much the project will cost and how long it will take. A process that is 95% consistent with 5% edge cases is a different project than one that is 70% consistent with 30% edge cases.

Preliminary assessment. At the end of Session 2, we give you one of three answers:

  • AI makes sense — this process is a good candidate, and we can estimate scope, timeline, and cost.
  • AI does not make sense — this process has characteristics that make AI a poor fit right now. We explain why.
  • AI makes sense, but not yet — the process needs preparation first. Maybe the data needs to be structured, or the workflow needs to be standardized. We tell you exactly what to fix and when to come back.

What We Do Not Do During Discovery

We do not pitch. We will not show you a demo of an AI system and say "imagine this for your business." Demos are designed to impress, and impressive demos lead to unrealistic expectations.

We do not oversimplify. If the answer is "it depends," we will explain what it depends on rather than giving you a confident-sounding wrong answer.

We do not commit to a solution during the session. Discovery is about understanding the problem. The solution comes in the specification phase, after we have had time to think through the architecture properly. Snap decisions made during a call are how bad systems get built.

We do not record the session without your permission. If we do record (for our reference notes), the recording is never shared outside the project team and is deleted after the assessment is written.

We do not rush to fit the conversation into a sales framework. If the session runs 10 minutes over because we are in the middle of mapping an important process, we finish properly. Quality of understanding matters more than keeping to a rigid schedule.


What You Get From the Consultation

After Discovery, you receive a written document. Not a slide deck, not a verbal summary — a written assessment that you can share with your team, revisit later, and use as a foundation for decision-making.

The assessment includes:

Which processes are suitable for AI, and which are not (and why). For each process we discussed, you get a clear verdict. Not "everything can be improved with AI" — that is salesmanship. Some processes are excellent candidates. Others are poor fits. And for each one, we explain the reasoning so you can evaluate our logic yourself.

A prioritized roadmap. What to start with, what to skip for now, and what to revisit later. We recommend starting with the process that offers the best combination of impact and feasibility — not the one that sounds most impressive. A process that saves 2 hours per week but is straightforward to automate is a better first project than one that saves 10 hours but has dozens of edge cases that would take months to handle.

A preliminary estimate. Scope, timeline, and cost range for the recommended first project. This is a range, not a fixed quote — the exact number comes after the Specification phase. But the range is honest. We do not lowball to get you started and surprise you later. If we estimate 3,000-5,000 PLN during Discovery, the final spec quote will be in that range. If something in the spec pushes it higher, we explain why before you commit.

A clear answer. One of three:

  • "Yes, this makes sense. Here is what we recommend as a first project."
  • "No, not now. Here is why, and here is what would need to change."
  • "Yes, but not yet. Here is the preparation you need to do before AI will work for you."

The assessment is yours. You can take it to another vendor, share it with your team, or put it in a drawer and revisit it in six months. There is no expiration date, no strings attached, and no follow-up sales pressure.

What the Assessment Does NOT Contain

No generic advice. "You should explore AI for your business" is something you can read in any blog post. Our assessment contains specific processes, specific estimates, and specific recommendations.

No technology recommendations for their own sake. We do not suggest tools because they are trendy. We suggest them because they solve the problem you described.

No inflated estimates of time or money saved. We estimate conservatively. If the email classification process saves "probably 4-6 hours per week," we write "4 hours" in the assessment, not "6+." Overpromising and then underdelivering destroys trust faster than anything else.

No promises about AI capabilities we cannot back up. If a task requires judgment that AI handles poorly today, we will say so. We will not claim the AI can do something just because it is theoretically possible. We claim it can do something when we have built similar systems and know the success rate.


Why the Consultation Is Free

This is the question we get most often, and the answer is straightforward.

If we discover AI is not the answer, we will tell you. No charge for honesty. We do not earn from consultations. We earn from good projects. And good projects only happen when both sides understand the problem before building the solution.

A bad consultation leads to a bad specification, which leads to a bad build, which leads to a failed project, which leads to a destroyed reputation. We have seen this happen to other vendors. We would rather invest 1-2 hours in an honest conversation than spend weeks building something that was doomed from the start.

Think of it as risk reduction for both sides. You invest 60 minutes of your time. We invest 60-120 minutes of ours. In exchange, both sides know whether this project makes sense before anyone commits money or significant time.

If Discovery reveals that your process is not ready for AI, you have lost one hour. Without Discovery, you might have lost weeks and thousands of PLN on a project that was never going to work.

The math is simple. Free Discovery is cheaper for everyone than expensive failures.

One more thing: some clients worry that free means low-quality. That we will rush through the session because we are not being paid. The opposite is true. We take Discovery seriously because our entire business depends on getting it right. A sloppy Discovery leads to a sloppy spec, which leads to a sloppy build, which leads to a client who tells their network that AI consulting is a waste of money. We cannot afford that. Neither can you.


What You Should Prepare

The quality of the consultation depends directly on the quality of information you bring. Here is what to have ready:

Real examples of the process you want to improve. Actual documents, actual emails, actual data. Messy is fine — in fact, messy is preferred. We need to see the real thing, not a sanitized version. If your emails have typos, attachments in random formats, and replies in broken English — show us that. Because that is what the AI will face.

Bring 5-10 examples. Not your best ones and not your worst ones — a representative sample. We need to understand the range of variation in your inputs.

A list of tools you currently use. Write it down before the session. Email provider, CRM, document storage, project management, spreadsheets — anything that touches the process you want to improve. Include the specific product names and, if you know them, the pricing plans (some features and API access are only available on higher tiers).

60 minutes of focused time. No multitasking. No checking email during the call. No "sorry, I need to take this" interruptions. We cover a lot of ground in 60 minutes, and every interruption costs us 5 minutes of context switching.

If you cannot block 60 uninterrupted minutes in your schedule right now, wait until you can. A distracted Discovery session produces shallow results, and shallow results lead to bad projects. We would rather postpone the session than do it poorly.

A rough idea of your budget range. You do not need to know exactly how much you want to spend, but it helps to have a ballpark. If your budget for the entire project is 2,000 PLN, we will focus on what is achievable within Tier 2. If you can invest 10,000+, we can explore system-level solutions. Knowing your range upfront means we will not waste time speccing a project that does not fit your budget.

Openness to honest assessment. We may say "this is not ready for AI yet." We may say "this process needs to be standardized before automation makes sense." We may say "you would get better ROI by hiring a part-time assistant than by building an AI system."

If you are looking for someone to validate a decision you have already made, we are not the right fit. If you are looking for someone to give you an honest assessment, even when the honest answer is not what you wanted to hear — that is what we do.

A Note on Confidentiality

Everything you share during Discovery — your processes, your data, your problems, your numbers — is confidential by default.

We do not discuss one client's business with another. We do not use your examples in our marketing without explicit written permission. We do not mention you as a client without your consent. And we do not store your data after the assessment is written and delivered.

If you want a formal NDA before the session, just ask. We will sign it before Discovery begins. Many clients do not feel they need one for the consultation phase, but it is always available.


After the Consultation: Next Steps

Discovery ends with one of three paths. Here is what happens on each one. There is no pressure to decide immediately — take the assessment, review it, discuss it with your team. We will follow up once with a brief email, and after that it is your move.

Path 1: "Yes, this makes sense."

We write a functional specification. This takes 3-5 working days and covers every detail of the project: what we will build, how it will work, what it will not do, how you will test it, and how much it will cost.

You review the spec. We discuss any questions or disagreements in writing. Nothing moves to build until both sides agree. If you want to share the spec with your technical advisor, your business partner, or your in-house developer — go ahead. We welcome extra eyes. The more people who agree on what we are building, the fewer surprises later.

The spec is the foundation of the entire project. It prevents scope creep, miscommunication, and the "that is not what I meant" conversations that derail projects. More about this in our process documentation.

The timeline from Discovery to Specification is typically 1-2 weeks. The timeline from Specification to live system depends on the project tier — anywhere from 2 weeks for a simple assistant to 16 weeks for a full system. We give you a specific estimate as part of the spec.

Path 2: "Not yet."

We tell you exactly what needs to change before AI becomes a viable solution. Maybe your data needs to be structured differently. Maybe your process has too many exceptions and needs to be standardized first. Maybe you are handling 10 inquiries per week and a simple email template would solve 80% of the problem at 1% of the cost of an AI system.

We write this down, with specific action items and a suggested timeline for revisiting the question. Not vague advice — concrete steps. "Standardize your report template to include these 7 fields. Use it consistently for 3 months. Once you have 50+ reports in the new format, come back and we will automate the generation."

You are welcome to come back when the prerequisites are in place. There is no expiration date on a good assessment. Some clients come back in 3 months. Some come back in a year. Some never come back because they solved the problem a simpler way — and that is a perfectly good outcome too.

Path 3: "No."

Honest, direct. Some problems are not AI problems. Some processes are too unstructured, too unpredictable, or too dependent on human judgment to benefit from AI automation right now. Maybe ever.

When the answer is no, we explain why. Not "it is complicated" — the actual technical or practical reasons. "Your process requires reading social cues from phone conversations and making judgment calls that depend on 15 years of industry experience. AI cannot replicate that. What it can do is transcribe the calls and organize the notes — but the actual decision-making stays with your team."

You walk away understanding your own process better, even if AI is not the answer. Several clients have told us that Discovery was valuable even when the answer was no, because the exercise of mapping their workflows in detail revealed inefficiencies they could fix without any technology at all.

Zero upselling. Zero "but maybe if we try this other approach..." pressure. We would rather send you away informed than keep you engaged in a project that will not deliver value.


How to Book

Email dawid@kuliberda.ai or use the booking link on our contact page.

First response within 24 hours. We will confirm a session time and send you a brief pre-session questionnaire so we can make the most of our 60 minutes together.

No sales funnels, no "schedule a call with our team" runaround. You email or fill out a short form — we respond and schedule.

What to include in your email: a brief description of what you do and which process you are thinking about improving. Two or three sentences is enough. We do not need a formal brief — we will cover everything during the session. The email just helps us prepare so we can hit the ground running.

If you are not sure what to write, "I run [type of business], I think AI might help with [general area], and I would like to find out" is a perfectly good starting point.


Common Questions About the Consultation

"What if I do not know which process to focus on?" That is fine. Session 1 is designed to help you figure that out. Come with a general sense of where your time goes and we will identify the best candidates together.

"What if I realize halfway through that AI is not for me?" Then we have both saved a lot of time and money. The session ends, you owe nothing, and you leave with a clearer understanding of your processes — which is valuable regardless.

"Can I bring a colleague?" Yes. In fact, we encourage it when the colleague is the person who actually performs the process we are discussing. The owner knows the strategy; the team member knows the reality. Both perspectives matter.

"Is this really free? What is the catch?" No catch. The business model works because good Discovery sessions lead to good projects. We invest the time upfront because it makes the rest of the engagement better for everyone. If Discovery leads to a project, we both win. If it does not, we have spent 1-2 hours and avoided a bad project. Either outcome is better than the alternative.

"How is this different from what other AI companies offer?" Most AI vendors start with a demo of a pre-built product and work backwards to find a problem it solves. We start with your problem and work forwards to find the solution that actually fits. Sometimes that is our system. Sometimes a simpler tool, a process change, or a human hire is the better answer.

"What happens to the information I share?" Everything discussed during Discovery is confidential. We do not share your business processes, data, or documents with anyone outside the project team. If you would like a formal NDA before the session, we will sign one. See our legal page for details.

"I have already talked to another AI vendor. Can I get a second opinion?" Absolutely. In fact, that is a good idea. Bring the other vendor's proposal if you want us to review it. We will give you an honest comparison of their approach versus what we would recommend. We are not threatened by competition — we are confident in our assessment because it is based on understanding your specific situation, not selling a pre-built product.

"What if my process is too simple for AI?" Then we will tell you during Discovery and save you money. Not every repetitive task needs AI. Sometimes a spreadsheet formula, a simple automation tool, or a well-designed template solves the problem for a fraction of the cost. We will recommend the simplest solution that works, even if that means recommending something other than our services.