2 comments

  • vamshieqvista12 5 hours ago

    RecruitLens.io looks useful for speeding up interview prep, especially for creating consistent question sets in areas like leadership, culture fit, and estimation that teams often end up rewriting manually. That said, AI-generated questions aren't "100% automatic": quality depends heavily on the prompt, and the output still needs human review to ensure role relevance and fairness and to avoid generic or biased questions.

  • elminson 8 hours ago

    Hey HN, I'm a software engineer who does hiring. Not full-time recruiting — just the kind of technical interviewing that falls on your plate when your team is growing and no one else wants to prep.

    The part that always got me was the 30-45 minutes before each interview. You read the resume, cross-reference the job description, mentally map the candidate's background to what you actually need to know about them, then cobble together questions that aren't just "tell me about yourself." Multiply that by 4 interviews a week and it becomes a real time sink — especially when half the questions end up being recycled anyway.

    So I built RecruitLens over a few weekends: paste a resume (PDF, raw text, or URL), and get 15 tailored interview questions in under 10 seconds. Questions are split into Technical, Behavioral, and Problem-Solving categories, and each one comes with an answer guide so you know what a good answer actually looks like. A few things that surprised me building this:

    Generic prompting is terrible for this. Early versions produced embarrassingly generic questions. The key was making the AI first identify the candidate's actual signal — specific skills, gaps, career transitions — and then generate questions that probe those exact things. The difference in quality was night and day.

    Interviewers don't want 50 questions. They want 15 good ones they can actually use in 45 minutes. Restraint in output mattered more than volume.

    Answer guides changed the most behavior. I expected the questions to be the main value. Turns out, knowing what "good" looks like for each question is what hiring managers actually cared about — especially for roles outside their core expertise.
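    For anyone curious, the two-stage idea (extract the candidate's signal first, then generate questions that probe it) can be sketched roughly like this. Function names and prompt wording here are my own illustration, not the actual implementation:

```python
def build_signal_prompt(resume_text: str, job_description: str) -> str:
    """Stage 1: ask the model to surface the candidate's actual signal
    (specific skills, gaps, career transitions) before any questions."""
    return (
        "Identify this candidate's specific skills, notable gaps, and "
        "career transitions relative to the job description.\n\n"
        f"Resume:\n{resume_text}\n\n"
        f"Job description:\n{job_description}"
    )


def build_question_prompt(signal_summary: str) -> str:
    """Stage 2: generate a small, usable set of questions that probe
    the extracted signal, each with an answer guide."""
    return (
        "Using the candidate signal below, write exactly 15 interview "
        "questions split into Technical, Behavioral, and Problem-Solving "
        "categories. For each question, include a short answer guide "
        "describing what a strong answer looks like.\n\n"
        f"Candidate signal:\n{signal_summary}"
    )


# The prompts run sequentially: stage-1 output feeds stage 2, so the
# questions target the extracted signal rather than generic resume text.
```

    The point is less the wording than the sequencing: one generic "write interview questions for this resume" prompt tends to produce generic output, while forcing an explicit analysis step first gives the second call something concrete to probe.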

    What it doesn't do (yet): it won't conduct the interview, score the candidate, or auto-reject anyone. I've been deliberate about keeping it a prep assistant rather than a decision-making tool. That line feels important.

    A few teams are using it now. The free tier gets you 1 analysis/week; Pro is $9/month for 100 analyses. I'd genuinely love feedback from anyone who does a lot of hiring — especially on the question categories. We have Technical / Behavioral / Problem-Solving right now, but I'm debating whether that's the right split or if it's too generic. A few things I'm curious about from HN:

    For those doing regular hiring: what does your actual prep process look like right now? Is there a category of interview questions (leadership, culture, estimation) you always end up adding manually? Any concerns about AI-generated questions you've thought through?

    https://recruitlens.io (still improving)