Executive Summary
- Under U.S. GAAP, most early-stage AI and machine learning work is expensed as R&D, but some costs can be capitalized as internal-use software once capitalization criteria are met.
- Auditors focus on the capitalization start date and will require evidence of management authorization, funding commitment, and probable completion under ASC 350-40.
- The most common audit failure points are weak time tracking, unsupported cloud cost allocations, and capitalizing experimentation or maintenance.
- AI-specific risks like model explainability, data licensing, and rapid obsolescence increase scrutiny around useful life and impairment.
- The fastest way to reduce audit friction is a repeatable evidence package: approvals, stage gating, labor allocation, cloud tagging, and monthly capitalization memos tied to the general ledger.
If you need audit-ready accounting for AI development costs, Ridgeway Financial Services helps AI and SaaS teams build GAAP-compliant capitalization policies, documentation packages, and controls that auditors can rely on, so your close and audit process does not become a recurring fire drill.
Table of Contents
- GAAP Framework for AI Development Costs
- When Capitalization Starts and Stops
- Audit Documentation Checklist for Capitalized AI Costs
- How Auditors Test Capitalization vs. Expensing
- AI-Specific Audit Risk Areas
- Common Audit Red Flags
- A Practical Implementation Playbook
- Bottom Line
GAAP Framework for AI Development Costs
U.S. GAAP does not have a dedicated “AI standard.” AI development costs are evaluated using existing guidance, most commonly:
- ASC 350-40 (Internal-Use Software) for AI embedded in internal systems or delivered via SaaS
- ASC 730 (Research and Development) for research and experimentation where future benefit is not sufficiently certain
- ASC 985-20 (Software to be Sold/Leased/Marketed) for software sold or licensed as a product (included here for completeness)
For many AI SaaS companies, the practical question is not "Is this AI?" but "Is this still research, or are we building production software that is probable to be completed and used as intended?"
The FASB's recent targeted improvements to internal-use software accounting modernized ASC 350-40 to better fit iterative, agile development. Even with that modernization, the audit reality remains the same: capitalization is allowed only after defined criteria are met, and the company must demonstrate those criteria with evidence.
When Capitalization Starts and Stops
For audit purposes, treat capitalization as a controlled gate with a start date and stop date, supported by documentation.
Capitalization start date
Auditors will expect evidence that, as of a specific date:
- Management authorized the project and committed funding
- The work is beyond exploratory research and is probable to be completed for its intended use
Capitalization stop date
Capitalization typically stops when the software (including the AI model component) is substantially complete and ready for its intended use. After that point, ongoing costs such as hosting, routine retraining, and maintenance are generally expensed as incurred.
Audit Documentation Checklist for Capitalized AI Costs
Auditors want a defensible story backed by documents. A strong evidence package usually includes the following.
1) Project authorization and funding commitment
- Project charter, approval memo, roadmap sign-off, or board minutes
- Budget authorization showing committed funding
- Identified product owner and planned timeline
Why it matters: auditors use this to validate that capitalization did not start while the work was still speculative.
2) Scope and “probable to complete” support
- Requirements and architecture documents
- Model evaluation plan and acceptance criteria (what “works” means)
- Milestone evidence showing feasibility moved from uncertain to probable
Why it matters: for AI, “probable completion” often hinges on whether core performance uncertainty is resolved.
3) Labor allocation and time tracking that ties to reality
This is the most common audit pressure point.
Minimum audit-defensible options:
- Time tracking by project code and activity type (new development vs. maintenance vs. research)
- Engineering system evidence (Jira/Linear tickets) mapped to capitalizable workstreams
- Monthly finance review and approval of allocations
Audit red flag: “magic percentages” (for example, capitalizing 70% of all engineering payroll without contemporaneous support).
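To make the contrast with "magic percentages" concrete, here is a minimal sketch of a contemporaneous labor allocation, assuming time entries tagged by project and activity type. The field names, activity categories, and loaded rates below are hypothetical examples, not a prescribed method:

```python
# Illustrative sketch: derive a monthly capitalizable labor allocation from
# time-tracking entries instead of applying a flat percentage to payroll.
from collections import defaultdict

CAPITALIZABLE = {"new_development"}  # eligible activity types (example set)

def allocate_labor(entries):
    """Sum cost by activity type; research and maintenance fall to expense.

    entries: iterable of dicts like
      {"engineer": "A", "project": "model-v2", "activity": "new_development",
       "hours": 6.0, "loaded_rate": 95.0}
    Returns (capitalized_total, expensed_total, per_project_detail).
    """
    capitalized = expensed = 0.0
    detail = defaultdict(float)
    for e in entries:
        cost = e["hours"] * e["loaded_rate"]
        if e["activity"] in CAPITALIZABLE:
            capitalized += cost
            detail[e["project"]] += cost
        else:
            expensed += cost
    return capitalized, expensed, dict(detail)

entries = [
    {"engineer": "A", "project": "model-v2", "activity": "new_development",
     "hours": 6.0, "loaded_rate": 95.0},
    {"engineer": "A", "project": "model-v2", "activity": "maintenance",
     "hours": 2.0, "loaded_rate": 95.0},
    {"engineer": "B", "project": "model-v2", "activity": "research",
     "hours": 8.0, "loaded_rate": 110.0},
]
cap, exp, by_project = allocate_labor(entries)
print(cap, exp, by_project)  # 570.0 1070.0 {'model-v2': 570.0}
```

The point is that the capitalizable split falls out of contemporaneous entries that finance can review monthly, not a percentage asserted after the fact.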
4) Cloud cost allocation with objective tagging
AI projects create heavy cloud usage. Auditors will expect you to segregate:
- Development and training runs that are directly attributable to building the model/software (potentially capitalizable)
- Production hosting, ongoing inference, and normal operations (expense)
Best practices:
- Tag cloud resources by project, environment (dev/test/prod), and owner
- Maintain monthly cloud allocation reports tied to invoices
- Document the allocation method and keep it consistent
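As a rough illustration of a tag-driven split, assume each invoice line item carries project, environment, and workload tags. The tag names and classification rule below are simplified assumptions for the sketch, not a GAAP determination:

```python
# Illustrative sketch: classify tagged cloud invoice line items into
# potentially capitalizable development/training spend vs. operating expense.
def classify_cloud_costs(line_items):
    """Split line items by their tags. Untagged items default to expense and
    are flagged for remediation rather than silently capitalized."""
    capitalizable, expense, untagged = [], [], []
    for item in line_items:
        tags = item.get("tags", {})
        if "project" not in tags or "environment" not in tags:
            untagged.append(item)
            expense.append(item)  # conservative default
        elif tags["environment"] in ("dev", "test") and tags.get("workload") == "training":
            capitalizable.append(item)
        else:
            expense.append(item)  # prod hosting, inference, normal ops
    return capitalizable, expense, untagged

items = [
    {"amount": 1200.0, "tags": {"project": "model-v2", "environment": "dev",
                                "workload": "training"}},
    {"amount": 800.0,  "tags": {"project": "model-v2", "environment": "prod",
                                "workload": "inference"}},
    {"amount": 150.0},  # untagged: flagged and expensed by default
]
cap_items, exp_items, flagged = classify_cloud_costs(items)
print(sum(i["amount"] for i in cap_items),
      sum(i["amount"] for i in exp_items), len(flagged))  # 1200.0 950.0 1
```

Defaulting untagged spend to expense keeps the method conservative and makes the monthly allocation report reproducible from the raw billing data.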
5) Vendor and third-party costs
- Invoices and statements of work
- Evidence the vendor work was development-stage and within the capitalization window
- Deliverables mapped to project milestones
6) Data licensing and rights documentation
Auditors frequently ask:
- Do you have the rights to use this data for the intended purpose?
- Is the license term-based or perpetual?
- Are there restrictions that could impair future benefit?
Keep:
- Executed license agreements
- Term summaries and renewal provisions
- Internal memo describing how the data supports the product and how long it remains useful
7) Capitalization memo and rollforward schedule
Create a monthly memo that includes:
- Project name and capitalization start date
- Costs by category (labor, cloud, vendors)
- Basis for capitalization (what criteria were met)
- Rollforward tying to the general ledger
- Evidence binder index (where support lives)
This “one pager per month per project” is what makes audits faster.
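The rollforward arithmetic behind that memo is simple, and auditors mostly care that it ties to the general ledger. A sketch with hypothetical figures:

```python
# Illustrative sketch: monthly capitalization rollforward that must tie to
# the GL ending balance. All figures are hypothetical.
def rollforward(beginning, additions, amortization, impairment, gl_ending):
    """beginning + additions - amortization - impairment should equal the GL
    ending balance; any difference is an unreconciled item to explain."""
    ending = beginning + sum(additions.values()) - amortization - impairment
    return ending, round(ending - gl_ending, 2)

additions = {"labor": 42000.0, "cloud": 9500.0, "vendors": 6000.0}
ending, diff = rollforward(
    beginning=310000.0, additions=additions,
    amortization=12500.0, impairment=0.0, gl_ending=355000.0,
)
print(ending, diff)  # 355000.0 0.0
```

A nonzero difference at close is the signal to investigate before the auditors do.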
8) Useful life and amortization support
Auditors will ask why the amortization period is reasonable, especially given rapid AI change.
- Useful life memo with factors like expected model refresh cycle, product roadmap, and obsolescence risk
- Amortization method and start date
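For straight-line amortization the arithmetic itself is trivial; the judgment auditors probe is the useful life. A sketch with hypothetical inputs (the three-year life is an example, not a benchmark):

```python
# Illustrative sketch: straight-line amortization over a deliberately short
# useful life reflecting expected model refresh cycles. Inputs are hypothetical.
def straight_line_schedule(capitalized_cost, useful_life_months):
    """Equal monthly amortization; the final period absorbs any rounding."""
    monthly = round(capitalized_cost / useful_life_months, 2)
    schedule = [monthly] * (useful_life_months - 1)
    schedule.append(round(capitalized_cost - monthly * (useful_life_months - 1), 2))
    return schedule

sched = straight_line_schedule(90000.0, 36)  # e.g., 3-year life for an AI feature
print(sched[0], len(sched), round(sum(sched), 2))  # 2500.0 36 90000.0
```

Whatever period you choose, the useful life memo should explain why it is consistent with your model refresh and roadmap assumptions.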
9) Impairment triggers and monitoring
You need a documented process to identify when capitalized AI assets may be impaired (for example, model abandoned, product pivot, regulation blocks usage). Auditors expect timely write-downs when needed.
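One lightweight way to operationalize this at each close is a trigger checklist. The triggers below are the examples from this section, not an exhaustive list, and a hit means "prepare an impairment assessment memo," not an automatic write-off:

```python
# Illustrative sketch: close-checklist evaluation of impairment triggers.
TRIGGERS = {
    "model_abandoned": "Development or use of the model has been abandoned",
    "product_pivot": "Product direction no longer uses the capitalized asset",
    "regulatory_block": "Regulation prevents the intended use",
}

def impairment_review(observations):
    """Return the triggers observed this close; any hit requires a documented
    impairment assessment."""
    return [name for name in TRIGGERS if observations.get(name)]

hits = impairment_review({"model_abandoned": False, "product_pivot": True})
print(hits)  # ['product_pivot']
```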
How Auditors Test Capitalization vs. Expensing
Auditors generally evaluate AI capitalization through four lenses:
1) Timing test
- Was the capitalization start date justified?
- Did the project meet the threshold criteria at that date?
- Were costs before that date expensed?
2) Nature of cost test
Auditors sample capitalized transactions and ask:
- Is this direct development (eligible) or research/maintenance (ineligible)?
- Is it incremental to build (eligible) or normal operations (ineligible)?
3) Allocation methodology test
Auditors test whether:
- Labor allocations are traceable
- Cloud allocations are tagged and reproducible
- The method is consistently applied
4) Completeness and bias test
Auditors look for incentives to over-capitalize, such as:
- Capitalization spikes near year-end
- Unusual changes in capitalization rate
- Inconsistencies versus prior periods
They also evaluate internal controls around the capitalization process, especially approvals and review controls.
AI-Specific Audit Risk Areas
Model explainability and governance
If model behavior is hard to explain, auditors may view the project as higher uncertainty for longer, which pressures the “probable completion” judgment.
High cloud spend and failed training runs
AI training often involves many experiments. Companies should avoid capitalizing costs that are essentially trial-and-error research rather than production build.
Data licensing and compliance constraints
If future use depends on fragile licensing rights or regulatory acceptance, auditors may challenge useful life assumptions and impairment conclusions.
Rapid obsolescence
If the model is expected to be replaced quickly (new architecture, new vendor model, new regulation), useful life may be shorter than typical software.
Common Audit Red Flags
These issues often trigger audit adjustments:
- Capitalizing while the project is still exploratory research with no clear feasibility support
- Capitalizing maintenance activities like ongoing retraining or minor improvements after the system is already in use
- Unsupported labor allocations (flat percentages with no time tracking)
- Cloud allocations without tags, environments, or direct linkage to training/development activity
- No impairment process despite abandoned models, pivots, or major external changes
- No written capitalization policy, or inconsistent application across projects
A Practical Implementation Playbook
If you want this to hold up under audit, implement a repeatable process.
Step 1: Establish a written policy and a gating workflow
- Define “research vs development” for your business
- Define the capitalization threshold and required approvals
- Define eligible and ineligible cost categories
- Define how you treat cloud training costs
Step 2: Implement operational tracking
- Time allocation or engineering ticket mapping to capitalizable workstreams
- Cloud tagging standards by project and environment
- Vendor statement-of-work tracking
Step 3: Produce monthly capitalization memos
- One memo per capital project per month
- Rollforward tied to GL
- Support index for audit sampling
Step 4: Monitor amortization and impairment
- Useful life review at least annually
- Impairment trigger checklist at each close
This approach reduces audit risk, avoids surprises, and makes financial reporting more credible to investors.
Bottom Line
AI development cost accounting is a documentation and controls problem as much as a technical GAAP problem. Under ASC 350-40, capitalization can be appropriate once criteria are met, but auditors will require evidence of authorization, probable completion, and eligible cost classification. Under ASC 730, research and experimentation remains expensed. If you build a consistent audit trail with strong labor and cloud cost support, you can capitalize appropriately and defend it.
FAQs
Can AI and machine learning development costs be capitalized under U.S. GAAP?
Sometimes. Many AI SaaS projects fall under ASC 350-40 where certain development costs can be capitalized once criteria are met, while research costs under ASC 730 are expensed.
What documentation do auditors expect for capitalized AI development costs?
Project approvals, evidence of probable completion, labor time tracking, cloud cost allocation support, vendor invoices, a monthly capitalization memo, and useful life and impairment support.
Can cloud GPU training costs be capitalized?
Only when they are directly attributable to building the software/model during the capitalizable window, and the allocation is objective and reproducible.
What are the biggest audit red flags for AI cost capitalization?
Unsupported labor percentages, capitalizing experimentation or maintenance, weak cloud tagging, late-year capitalization spikes, and lack of impairment or governance documentation.
Reviewed by YR, CPA
Senior Financial Advisor