32 comments

  • sparselogic 4 hours ago

    This is fun to see. Some of my family are Division 10 contractors: their GCs love them because they spot design coordination and code issues early and keep the project from getting derailed. Bringing that to the entire project is a serious lifesaver.

    • aakashprasad91 4 hours ago

      Totally! Division 10 and specialty trades are often the first to see coordination issues show up in the field. We’re trying to bring that same early-warning benefit across the entire drawing set so errors never make it to construction. Would love to run a real project from your family’s world if they’re open to it!

  • knollimar an hour ago

    What kind of system do you have for parsing symbology?

    Do you check anything like cross-discipline coordination (e.g. searching online specification data for parts shown on drawings, like mechanical units, and detecting mismatches with the electrical spec), or is it wholly within one trade's code at a time?

    edit: there's info that answers this on the website. It seems limited to the common ones (e.g. elec vs arch), which makes sense.

    • aakashprasad91 25 minutes ago

      Symbol variation is a huge challenge across firms.

      Our approach mixes OCR, vector geometry, and learned embeddings so the model can recognize a symbol plus its surrounding annotations (e.g., “6-15R,” “DIM,” “GFCI”).

      When symbols differ by drafter, the system leans heavily on the textual/graph context so it still resolves meaning accurately. We’re actively expanding our electrical symbol library and would love sample sets from your workflow.
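
      To give a rough feel for what I mean, here's a toy sketch of the context-resolution step (illustrative only, not our production code; the names and threshold are made up):

      ```python
      from dataclasses import dataclass
      from math import dist

      # Illustrative sketch: disambiguate a symbol whose shape varies by drafter
      # by looking at the OCR'd text around it. Types and values are made up.

      @dataclass
      class Symbol:
          center: tuple      # (x, y) location on the sheet
          best_guess: str    # label from the learned detector

      @dataclass
      class Annotation:
          center: tuple
          text: str          # OCR'd text near the symbol

      NEARBY = 50.0  # drawing units to search for annotations (made-up value)

      def resolve(symbol: Symbol, annotations: list[Annotation]) -> str:
          context = " ".join(a.text.upper() for a in annotations
                             if dist(symbol.center, a.center) <= NEARBY)
          if "GFCI" in context:
              return "gfci_receptacle"
          if "6-15R" in context or "NEMA" in context:
              return "special_purpose_receptacle"
          if "DIM" in context:
              return "dimmer_switch"
          return symbol.best_guess  # no overriding text nearby; trust the detector

      print(resolve(Symbol((10, 10), "duplex_receptacle"),
                    [Annotation((20, 12), "6-15R")]))
      # -> special_purpose_receptacle
      ```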

    • aakashprasad91 an hour ago

      We parse symbols using a mix of vector geometry, OCR, and learned detection for common architectural/MEP symbols. Cross-discipline checks are a big focus as we already flag mismatches between architectural, structural, and MEP sheets, and we’re expanding into deeper electrical/mechanical spec alignment next. Would love to hear which symbols matter most in your workflow so we can improve coverage.
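
      As a toy illustration of what a cross-discipline check can look like in practice (not our actual implementation; the schedule fields and sample values are made up):

      ```python
      # Illustrative: compare the electrical data in a mechanical equipment
      # schedule against the circuit serving that unit on the electrical sheets.

      NOMINAL = {460: 480, 480: 480, 200: 208, 208: 208}  # utilization vs. system voltage

      mechanical_schedule = {   # parsed from the M sheets
          "RTU-1": {"volts": 460, "phase": 3, "mca": 28.0},
      }
      electrical_circuits = {   # parsed from the E panel schedules
          "RTU-1": {"volts": 208, "phase": 3, "breaker_amps": 25},
      }

      def check_unit(tag):
          m, e = mechanical_schedule.get(tag), electrical_circuits.get(tag)
          if m is None or e is None:
              return f"{tag}: appears in one discipline but not the other"
          issues = []
          if NOMINAL.get(m["volts"], m["volts"]) != NOMINAL.get(e["volts"], e["volts"]):
              issues.append(f"voltage mismatch ({m['volts']}V unit on a {e['volts']}V circuit)")
          if m["phase"] != e["phase"]:
              issues.append("phase mismatch")
          if e["breaker_amps"] < m["mca"]:
              issues.append("breaker smaller than the unit's minimum circuit ampacity")
          return f"{tag}: " + ("; ".join(issues) if issues else "no issues found")

      print(check_unit("RTU-1"))
      ```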

      • knollimar an hour ago

        I do electrical, so parsing lighting is often a big issue. (Subcontractor)

        One big issue I've had is that drafters use the same symbol for different things, varying from person to person. One person's GFCI is another's switched receptacle. Some people use the specialty outlet symbol very precisely and others don't, often accompanied by an annotation (e.g. 6-15R).

        Dimmers being ambiguous is huge; avoiding dimming-type mismatches is basically 80% of the Lutron value add.

  • knollimar an hour ago

    Maybe this is saying the quiet part out loud: how do you deal with bogus specs that designers end up not caring about since they're copy pasted? Is it just mission accomplished when you point out a potential difficulty?

    • aakashprasad91 an hour ago

      We see that a lot — specs that are clearly boilerplate or outdated relative to the drawings. Our goal isn’t to force a change, but to surface where the specs and drawings diverge so the designer can quickly decide what’s intentional vs what’s baggage. “Flag + context for fast human judgment” is the philosophy.

  • knollimar 2 hours ago

    Is the pay-as-you-go model percentage-based or sized by project? I've had issues with the conflict of interest between being lean vs. not; it's hard to sell on %-based revenue.

    Also, who is this targeted at? Subcontractors, GCs, design?

    • aakashprasad91 an hour ago

      We price per project based on size/complexity, not a % of construction cost, so there’s no conflict of interest around bigger budgets. Today our main users are architects/engineers and GC pre-con teams, but subs who catch coordination issues early also get a ton of value.

      • knollimar an hour ago

        At what stage do you run this on plans? Like DD, some % CD? What's the intended target timeframe?

        I don't see how subs get much value unless they can use it on ~80% CDs for bid phases.

        • aakashprasad91 an hour ago

          Most teams run us late DD through CD, anywhere the set is stable enough that coordination issues matter. Subs especially like running it pre-bid at ~80–100% CDs so they don’t inherit coordination risk. Earlier checks also help designers tighten the set before hand-offs, so value shows up at multiple stages. Eventually the goal is to be a continuous QA tool, including during construction, by pulling in field data and comparing it to the drawings and specs, e.g. the drawings show size X but field photos show size Y.

          • knollimar an hour ago

            Would love to run it and give feedback if it's cheap to do so; my company just finished a bunch of projects and would love to cross-reference whether it catches the issues we found by hand (assuming it's inexpensive enough). I do high-rise electrical work for a subcontractor.

            • aakashprasad91 an hour ago

              We’d love that — perfect use case. Send a recent set and we’ll run a discounted comparison so you can see what we catch vs. what surfaced during construction. If helpful, we can hop on a quick call to walk through results and collect feedback. Email me aakash@inspectmind.ai

  • Doerge 4 hours ago

    I love this!

    Stupid question: Would BIM solve these issues? I know northern Europe is somewhat advanced in that direction. What kind of digitalization pace do you see in the US?

    • knollimar 2 hours ago

      BIM just shuffles the problem around. There are firms that do "one source of truth" BIM models, but the real issue is conflicts and workflow buy-in.

      How do you get the architect to agree with the engineer, the lighting designer, and the lighting contractor when they all have different, non-overlapping deadlines, work periods, knowledge, and scope?

      edit: if you don't work in the industry, BIM helps for "these two things are in the same spot", but not much for code unless it's about clearance or some spatial calculation.

      • aakashprasad91 an hour ago

        100% agree the hardest problems are workflow and incentives, not file formats.

        Even with a perfect BIM model, late changes and discipline silos mean drawings still diverge and coordination issues sneak through.

        We’re trying to be the “safety net” that catches what falls through when teams are moving fast and not perfectly in sync.

    • aakashprasad91 4 hours ago

      BIM definitely helps, but most projects still rely heavily on 2D PDFs for coordination and permitting, especially in the US. Even when BIM exists, drawings often lag behind the model and changes don’t stay perfectly synced. We see AI plan checking as a bridge that helps teams catch what falls through the cracks in today’s workflows. And BIM only catches certain issues, not building code violations, etc.

  • testUser1228 3 hours ago

    The bathroom height example in your video is really interesting (checking the bathroom height above the toilet against building code). How does it know when to check drawings against code provisions, and how does it know which code to look at?

    • aakashprasad91 2 hours ago

      We infer the applicable codes from the project metadata + the drawings themselves.

      The location + occupancy/use type tells us the governing code families (e.g., IBC/IRC, ADA, NFPA, local amendments), and then we parse the sheets for callouts, annotations, assemblies, and spec sections to map them to the relevant provisions.

      So the system knows when to check (e.g., plumbing fixture clearances) because of the objects it detects in the drawings, and it knows what code to check based on jurisdiction + building type + what’s being shown in that detail.

      The model still flags things for human review, so designer judgment stays in the loop.
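
      A very simplified sketch of that routing logic (illustrative only; the real mappings are much larger and jurisdiction-specific):

      ```python
      # Illustrative only: map project metadata to governing code families, then
      # map detected objects to the checks worth running. Real tables are far larger.

      CODE_FAMILIES = {
          ("US", "residential"): ["IRC", "NEC", "local amendments"],
          ("US", "commercial"):  ["IBC", "ADA", "NFPA", "NEC", "local amendments"],
      }

      CHECKS_BY_OBJECT = {
          "toilet": ["fixture_clearance", "grab_bar_height"],
          "stair":  ["riser_tread_dimensions", "handrail_height"],
          "door":   ["egress_width", "maneuvering_clearance"],
      }

      def checks_for(project, detected_objects):
          codes = CODE_FAMILIES.get((project["country"], project["occupancy"]), [])
          return [
              {"object": obj, "check": chk, "code_families": codes}
              for obj in detected_objects
              for chk in CHECKS_BY_OBJECT.get(obj, [])
          ]

      project = {"country": "US", "occupancy": "commercial", "jurisdiction": "San Francisco, CA"}
      for finding in checks_for(project, ["toilet", "stair"]):
          print(finding)
      ```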

      • testUser1228 2 hours ago

        Gotcha, so the model is identifying elements on the sheets and determining when to run code checks? Is the model running thousands of code checks per drawing set? I would imagine there are lots of elements that could trigger that

        • aakashprasad91 2 hours ago

          Yep, the model identifies objects/conditions on sheets (fixtures, stairs, rated walls, landings, etc.) and triggers the relevant checks automatically. It can run thousands of checks per project, but we only surface high-confidence findings where the combination of geometry + annotations + code context points to a real risk. Humans stay in the loop to confirm what matters.
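
          Conceptually, the triage step is just a confidence gate over everything that got triggered. A toy sketch (the scores and threshold are made up for the example):

          ```python
          # Illustrative: run many triggered checks, but only surface findings where
          # the combined evidence clears a confidence bar.

          SURFACE_THRESHOLD = 0.8

          def triage(findings):
              surfaced = []
              for f in findings:
                  # crude combination of independent evidence scores, each in [0, 1]
                  confidence = f["geometry"] * f["annotation"] * f["code_match"]
                  if confidence >= SURFACE_THRESHOLD:
                      surfaced.append({**f, "confidence": round(confidence, 2)})
              return sorted(surfaced, key=lambda f: -f["confidence"])

          findings = [
              {"check": "fixture_clearance", "geometry": 0.97, "annotation": 0.95, "code_match": 0.92},
              {"check": "handrail_height",   "geometry": 0.60, "annotation": 0.50, "code_match": 0.90},
          ]
          for f in triage(findings):
              print(f["check"], f["confidence"])  # only the clearance finding clears the bar
          ```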

  • frogguy 2 hours ago

    Are you doing code checks for structural issues? If so, how do you deal with licensing on common code orgs, such as ASCE?

    • aakashprasad91 2 hours ago

      Great question. We currently focus primarily on coordination, dimension conflicts, missing details, and clear code-triggered checks that don’t require sealed structural judgment. For structural code references (e.g., ASCE-7), we infer applicable sections and surface potential issues for a licensed engineer to review. We don’t replace engineering judgment or sealed design accountability.

  • T1tt 5 hours ago

    "an AI “plan checker”" do you have some public benchmark for how many issues you can find?

    how does this work behind the scenes?

    • aakashprasad91 5 hours ago

      Great questions. We’re working on a more formal public benchmark and will share results as our dataset grows. Today, we typically catch coordination issues like conflicting dimensions, missing callouts, building code and clearance violations that humans often miss in large sheet sets. Behind the scenes it’s a multimodal workflow: OCR + geometry parsing + cross-sheet callout graph + constraint checks vs. code/spec requirements.
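
      To make the callout-graph part concrete, here's a toy sketch of how dangling cross-sheet references get flagged (illustrative structure only, not our codebase):

      ```python
      # Illustrative: a tiny cross-sheet callout graph. A callout like "5/A-501" on
      # a plan should resolve to detail 5 actually drawn on sheet A-501; dangling
      # references become findings.

      callouts = [  # extracted from plan sheets via OCR + geometry
          {"from_sheet": "A-101", "ref": "5/A-501"},
          {"from_sheet": "A-102", "ref": "2/A-502"},
      ]
      details_on_sheet = {"A-501": {5}, "A-502": {1, 3}}  # details actually drawn

      def dangling_callouts(callouts, details_on_sheet):
          issues = []
          for c in callouts:
              num, sheet = c["ref"].split("/")
              if int(num) not in details_on_sheet.get(sheet, set()):
                  issues.append(f'{c["from_sheet"]} references {c["ref"]}, '
                                f"but that detail is not on {sheet}")
          return issues

      print(dangling_callouts(callouts, details_on_sheet))
      # -> ['A-102 references 2/A-502, but that detail is not on A-502']
      ```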

  • BoorishBears 5 hours ago

    Not shade, and it's a small thing, but why do you list your investors as social proof here?

    Isn't the target persona someone who'd be at best indifferent, and at worst distrustful, of a tech product that leads with how many people invested in it? Especially vs the explanation and actual testimonials you're pushing below the fold to show that?

    • aakashprasad91 5 hours ago

      Totally fair callout and appreciate the feedback. We’re already testing alternative hero layouts focused purely on real customer results and example issues caught. Our goal is to win trust by demonstrating usefulness/results, not who invested in us.

      • an_aparallel 42 minutes ago

        Where would my firm's documents end up (on whose servers) to do this checking? I don't know how any firm would just hand out their CDs like that.

        Or is being that lax normal these days?

        Aside: this field is insanely frustrating. The chasm between clash detection and resolution is a right ball ache... between ACC, Revizto, and Aconex clash detection (and the like), the de facto standard is pretty much telling me x is touching y... great... can you group this crap intelligently to get my high-rise clashes per discipline from 2000 down to 10? Can you navigate me there in Revit (yes, switchback in Revizto is great, but Revizto itself could improve)?

        • aakashprasad91 25 minutes ago

          Yes, one of the biggest values of our system is reducing “noise.” Instead of surfacing 2,000 micro-clashes, we cluster findings into higher-order issues (e.g., “all conflicts caused by this duct run” or “all lighting mismatches tied to this dimming spec”). We’re not a BIM viewer yet, but we do map issues back to sheet locations, callouts, and detail references so teams can navigate directly to the real source of the problem.
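
          The clustering itself is conceptually simple, even though the root-cause attribution in real sets isn't. A toy sketch (field names and the grouping rule are made up for the example):

          ```python
          from collections import defaultdict

          # Illustrative: collapse many raw clashes into a few root-cause groups keyed
          # by the element that keeps causing them.

          raw_clashes = [
              {"id": 1, "elements": ("DUCT-07", "BEAM-12"),      "disciplines": "mech/struct"},
              {"id": 2, "elements": ("DUCT-07", "CONDUIT-33"),   "disciplines": "mech/elec"},
              {"id": 3, "elements": ("DUCT-07", "SPRINKLER-08"), "disciplines": "mech/fp"},
              {"id": 4, "elements": ("LIGHT-44", "DUCT-02"),     "disciplines": "elec/mech"},
          ]

          def cluster_by_common_element(clashes):
              counts = defaultdict(int)
              for c in clashes:
                  for el in c["elements"]:
                      counts[el] += 1
              groups = defaultdict(list)
              for c in clashes:
                  # attribute each clash to its most clash-prone element (likely root cause)
                  root = max(c["elements"], key=lambda el: counts[el])
                  groups[root].append(c["id"])
              return dict(groups)

          print(cluster_by_common_element(raw_clashes))
          # -> {'DUCT-07': [1, 2, 3], 'LIGHT-44': [4]}  (four clashes become two issues)
          ```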

        • aakashprasad91 25 minutes ago

          We store files securely on AWS with strict access controls, encryption in transit and at rest, and zero sharing outside the file owner’s account. Only our engineers can access a project for debugging and only if the customer explicitly allows it. We can also offer an enterprise option with private cloud/VPC deployment for firms that require even tighter controls. Users can delete all files permanently at any time.

        • shuangly 32 minutes ago

          Documents are stored on AWS with strict access controls, meaning they are only accessible to the file owner and, if necessary, our engineers for debugging purposes. After the check, users can delete the project and optionally permanently delete the files from our S3 buckets on AWS.