Frequently Asked Questions

What it does, what it costs, and whether it actually works.

How is this different from just asking ChatGPT to rewrite my resume?

ChatGPT gives you one model's best attempt, with no mechanism to catch its own errors. This service runs a structured pipeline where generation, critique, and authenticity verification are handled by three separate models with no shared context between them.

After the first model rewrites your CV against the job description, a second model reviews that output as a hiring manager. It uses 10 named failure categories - unsupported claim, seniority mismatch, generic language, weak opening, low evidence density, irrelevant argument, weak closing, overuse of job description language, low specificity, and missing required signal. It does not provide encouragement. It finds problems.

The output that passes critique then goes to a third model, which runs a 7-dimension authenticity check: language naturalness, cliché density, evidence anchoring, human readability, structural variety, CV alignment, and specificity. Each dimension scores 0 to 1; anything below 0.8 fails. Failed content is revised against the structured failure objects and rechecked - not re-prompted loosely.
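The pass/fail logic described above amounts to a simple threshold gate. This is an illustrative sketch, not the service's actual code - the dimension keys and function name are assumptions:

```python
# Sketch of a 7-dimension authenticity gate: every dimension must
# score at or above the pass threshold, or the document fails.
PASS_THRESHOLD = 0.8

DIMENSIONS = [
    "language_naturalness",
    "cliche_density",
    "evidence_anchoring",
    "human_readability",
    "structural_variety",
    "cv_alignment",
    "specificity",
]

def authenticity_gate(scores: dict[str, float]) -> tuple[bool, list[dict]]:
    """Return (passed, failures); a missing dimension counts as a failure."""
    failures = [
        {"dimension": d, "score": scores.get(d, 0.0), "threshold": PASS_THRESHOLD}
        for d in DIMENSIONS
        if scores.get(d, 0.0) < PASS_THRESHOLD
    ]
    return (not failures, failures)
```

The failure dicts are what a downstream revision step would consume: each one names the dimension, the score, and the threshold it missed.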

ChatGPT does not do any of this. It produces. It does not audit its own output through a separate system that cannot see the original generation context.

I'm worried the output will sound like AI wrote it. How is that handled?

That concern is the reason the authenticity check exists as a dedicated pipeline stage, not a prompt instruction.

A third model - operating separately from the one that generated your content - scores the output across 7 specific dimensions. Language authenticity checks for formulaic sentence structure and repetitive openings. Cliché elimination checks buzzword density. Human reading test checks whether the rhythm and phrasing read as produced by a person. Structural naturalness checks for mechanical paragraph uniformity. All 7 must score 0.8 or above.

The model doing this check did not write the content. It cannot be compromised by the original generation context. When dimensions fail, the specific violations and suggested fixes are passed to a revision step as typed objects. The revision addresses each failure explicitly before the output reaches you.

There is also a banned-phrase list enforced at generation: "results-driven," "team player," "proven track record," "passionate about," "leverage my expertise," and several dozen others are excluded at the source, not caught after the fact.
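A blocklist like this is straightforward to enforce at generation time. Here is a minimal sketch using only the five phrases quoted above (the FAQ describes the real list as several dozen entries long):

```python
# Illustrative subset of the banned-phrase list; the production list
# is larger and is enforced at generation, not patched afterwards.
BANNED_PHRASES = [
    "results-driven",
    "team player",
    "proven track record",
    "passionate about",
    "leverage my expertise",
]

def find_banned_phrases(text: str) -> list[str]:
    """Return the banned phrases present in the text, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in BANNED_PHRASES if phrase in lowered]
```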

What do I actually receive?

Rewritten CV - every work experience entry rewritten against the target job description. Every claim anchored to what you originally documented. A separate verification step traces each bullet and theme back to its source section in your CV. If a claim cannot be supported, it is not included.

Full change log - every edit documented with the specific reason it was made. What changed, what was added, and why. You can review each modification and revert anything you disagree with.

ATS compatibility score - your rewritten CV scored across five dimensions: keyword coverage, competency alignment, experience relevance, quantification, and structural compliance. Missing keywords, gaps, and formatting issues identified with specific fixes before you submit.

Three cover letter styles - a direct professional pitch; a problem-framing version that opens with the company's stated challenge; a career narrative tracing your progression toward the role. Each is structurally distinct, and each passes the 7-dimension authenticity check.

Interview prep Q&A - anticipated questions based on the role and your documented background, with suggested response framings.

All outputs are returned in structured form and stored against your account for reference.

Does the AI see my name, email, or phone number?

No. Before any model call, a dedicated sanitization step removes names, email addresses, phone numbers, URLs, and LinkedIn references from your CV text. The models receive a document with no identity attached. This is a technical control: there is nothing to transmit because the data is removed before transmission occurs.

Your real identifiers are reinserted locally in your browser after the AI response returns. A PII map is maintained client-side for this purpose. The models and the server never hold both the redacted placeholders and your actual information at the same time.
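The sanitize-and-reinsert round trip can be pictured as follows. This is a minimal sketch under stated assumptions - regex-based detection and bracketed placeholders - and the production patterns and placeholder format are not published:

```python
import re

# Illustrative detection patterns only; the real sanitizer also strips
# names, URLs, and LinkedIn references.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d ()-]{7,}\d"),
}

def sanitize(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with placeholders; return redacted text and the PII map."""
    pii_map: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            pii_map[placeholder] = match
            text = text.replace(match, placeholder)
    return text, pii_map

def reinsert(text: str, pii_map: dict[str, str]) -> str:
    """Runs client-side: restore real identifiers after the response returns."""
    for placeholder, original in pii_map.items():
        text = text.replace(placeholder, original)
    return text
```

The key property is that only the redacted text leaves for the model call, while the map stays on the client for reinsertion.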

Full data handling policy →

Is my CV stored anywhere?

The uploaded file and the text extracted from it are held in memory during processing only. When the request completes, they are discarded. No file storage. No database entry. No reconstruction possible.

What is stored is the generated output - the rewritten CV and cover letters - associated with your account. That stored record does not include your original CV, the job description, your personal identifiers, or the intermediate AI inputs. It is the finished document only.

What this service cannot do with your data, by design →

Why three AI models instead of one?

A model optimized for generation is not well-suited to detecting its own failures. The separation is structural, not cosmetic.

The first model receives your anonymized CV and the job description. It produces the initial rewrite - mapping experience to requirements, identifying transferable skills, rewriting bullets with specific claims.

The second model receives only the output of the first - not the original CV, not the job description, not the generation instructions. It operates as a hiring manager reviewing a document cold. It cannot be influenced by the reasoning that produced the content. It checks against 10 named failure categories and returns machine-readable findings with severity levels.

The third model receives only the output that passed critique. It checks for AI-generated patterns across 7 scored dimensions. It knows nothing about the generation or the critique - only the document in front of it.

If critique or authenticity fails, a revision step runs against the specific typed failure objects. The maximum number of revision loops is bounded. If the pipeline does not fully pass within that limit, partial outputs and structured failure reasons are returned - not silent failure and not an unchecked result.
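The bounded loop above can be sketched with the model calls passed in as stand-in functions. Everything here - the function names, the loop bound, the result shape - is a hypothetical illustration of the control flow, not the service's implementation:

```python
def run_pipeline(draft, critique, check_authenticity, revise, max_loops=3):
    """Revise against typed failure objects until clean or the bound is hit.

    critique / check_authenticity return lists of failure objects;
    revise takes the draft plus those failures and returns a new draft.
    """
    failures = critique(draft) + check_authenticity(draft)
    loops = 0
    while failures and loops < max_loops:
        draft = revise(draft, failures)  # each named failure addressed
        failures = critique(draft) + check_authenticity(draft)
        loops += 1
    if failures:
        # Bound exhausted: return partial output WITH its failure reasons,
        # never a silent failure and never an unchecked result.
        return {"status": "partial", "output": draft, "failures": failures}
    return {"status": "passed", "output": draft}
```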

Full technical architecture →

How do I know the rewritten bullets are based on what I actually did - not invented?

A dedicated step traces every claim in the output back to its source in your original CV. For each bullet and theme, it identifies: the exact source CV section, the original text, how it was transformed, and whether it is supported. Unsupported claims are flagged separately.

Evidence rules are enforced throughout. Metrics that do not appear in or cannot be directly derived from your CV are not used. "~50 users" cannot become "thousands of users." If no metric exists, scope is described specifically without fabricating a number. Seniority framing cannot exceed what your documented roles and dates support. If the fit between your background and the job is partial, the output reflects that narrowly rather than overstating it.
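One way to picture the per-claim trace record described above - the field names here are hypothetical, not the service's schema:

```python
from dataclasses import dataclass

@dataclass
class ClaimTrace:
    claim: str            # the rewritten bullet or theme
    source_section: str   # exact CV section the claim came from
    original_text: str    # what the CV originally said
    transformation: str   # how the wording was changed
    supported: bool       # False => flagged separately, excluded from output

def supported_claims(traces: list[ClaimTrace]) -> list[str]:
    """Only claims traceable to the original CV reach the final document."""
    return [t.claim for t in traces if t.supported]
```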

Is there a subscription?

No. Pay per optimization.

  • Single: €4.99 - 1 CV revision + 3 cover letters
  • 5 Pack: €17.99 (€3.60 each) - 5 CV revisions + 15 cover letters
  • 20 Pack: €49.99 (€2.50 each) - 20 CV revisions + 60 cover letters

No recurring billing. No trial that auto-renews. No card stored until you choose to proceed. Each optimization is role-specific - run against the job description you paste. The 5- and 20-pack tiers are structured for active job seekers applying across multiple positions.

Full pricing breakdown →

What file types do you accept, and how long does it take?

PDF only, including scanned and image-based PDFs. For image-based files, OCR runs server-side to extract text before the pipeline begins. Only the extracted text - after PII stripping - proceeds to the models. Raw image data is not transmitted.

The full three-model pipeline, including critique and authenticity passes, runs in under 30 seconds.

Will my data be used to train AI models?

No. API calls to the AI providers are made under commercial terms that prohibit use of API request data for model training by default. The content you submit for processing is not used to improve any AI model.

Additionally, the content sent to the models has already had your personal identifiers removed. Even under the providers' default data handling, the models do not receive information that could identify you.

What happens if my CV fails the authenticity or critique checks?

The pipeline handles this internally. If critique or authenticity findings exceed the pass threshold, a revision step runs automatically against the specific typed failure objects - not a vague re-prompt. The revision addresses each named issue before rechecking.

The number of revision loops is bounded. If the output does not fully pass within that limit, you receive whatever was produced along with the structured failure reasons - which failures occurred, in which dimensions, and what the specific violations were. You are never returned an unchecked result without being told it failed to pass.

In practice, the majority of outputs pass within the first or second revision loop.

You have the answers. The tool is ready.

Three-model pipeline. PII stripped before any AI call. Full pipeline under 30 seconds. Pay per optimization - no subscription, no auto-renewal.