Your instance's adoption patterns across six domains. Each user is scored and classified into an archetype based on their activity.
Archetype Distribution
How archetypes work
Each user is scored across 6 adoption domains based on their GitLab API activity over the last 90 days.
Users are classified into the first matching pattern (priority-ordered):
- Dormant Expert — 2+ domains above 50%, inactive 45+ days
- Builder — High Source Control + CI/CD, low Review + Security
- Platform Engineer — High Infrastructure + CI/CD
- Reviewer — High Code Review, some Collaboration
- Solo Contributor — High Source Control, everything else low
- Generalist — 3+ domains above 50%
- Emerging User — Low-moderate across 3+ active domains
- Developing — No dominant pattern (catch-all)
“High” means above the 60th percentile of active users on your instance.
Domain Score Distribution
Box = IQR, line = median, diamond = mean, whiskers = 5th–95th percentile.
What each domain measures
- Source Control — push count, MR authorship, branch creation
- Code Review — reviews given, merge actions, cross-project reviews
- CI/CD — pipeline runs, source diversity, deploy stage activity
- Security — scan types configured (SAST, DAST, dependency, secret detection), scan frequency, passive project exposure
- Collaboration — cross-project spread, issue triage, comment volume
- Infrastructure — automation sources (schedule, API, trigger) across multiple projects
Scores are percentile-ranked: a score of 70% means the user is in the 70th percentile for that domain on your instance. Security and Infrastructure use threshold-based scoring instead.
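The percentile ranking above can be sketched as follows. This is a minimal illustration, not manifold's actual implementation — the exact tie-handling and the definition of "active user" are assumptions:

```python
def percentile_rank(value, population):
    """Fraction of the population with a raw value at or below `value`.

    `population` is assumed to be the non-zero raw scores for one
    domain across all active users on the instance.
    """
    if not population:
        return 0.0
    at_or_below = sum(1 for v in population if v <= value)
    return at_or_below / len(population)

# A user whose raw push count is at or above 60% of peers scores 0.6.
raw_pushes = [3, 8, 12, 20, 25, 31, 40, 55, 70, 90]
print(round(percentile_rank(31, raw_pushes), 2))  # → 0.6
```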
Adoption Heatmap
Average domain score per archetype. Brighter = stronger adoption.
Archetype Relationships
Nodes sized by user count, connected by shared project activity.
Based on your team's activity patterns, these are the highest-impact enablement opportunities. Each gap represents team members who are strong in one domain but underutilizing another — share these with your team leads to focus training where it matters most.
How Gaps Are Identified
For each user, we compare their strongest and weakest adoption domains.
- Score Gap — difference between best and worst domain score
- Strong Domain — the user's most-adopted area
- Weak Domain — the area with the most room to grow
- Do This Next — specific enablement recommendation linked to GitLab docs
Gaps are aggregated to show which workshops would impact the most users.
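The gap identification and aggregation described above can be sketched like this. The tie-breaking behavior and data shapes are assumptions for illustration:

```python
from collections import Counter

def find_gap(scores):
    """Return (strong_domain, weak_domain, gap) for one user's
    domain scores; ties break arbitrarily in this sketch."""
    strong = max(scores, key=scores.get)
    weak = min(scores, key=scores.get)
    return strong, weak, scores[strong] - scores[weak]

def aggregate_gaps(users):
    """Count users sharing the same strong→weak pair, so workshops
    can be ranked by number of team members affected."""
    return Counter(find_gap(s)[:2] for s in users)

team = [
    {"source_control": 0.9, "security": 0.1, "ci_cd": 0.5},
    {"source_control": 0.8, "security": 0.2, "ci_cd": 0.6},
    {"ci_cd": 0.9, "security": 0.7, "source_control": 0.3},
]
print(aggregate_gaps(team).most_common(1))
# → [(('source_control', 'security'), 2)]
```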
Top Enablement Gaps
Areas where focused training would have the highest impact, ranked by number of team members affected.
| Strong Domain | Weak Domain | Users Affected | Do This Next | Projected Health |
|---|---|---|---|---|
Enablement groups clustered by training need, plus project team profiles. Each group represents users who would benefit from the same workshop.
How Groups Are Formed
Users are clustered by the workshop that addresses their weakest domain.
- Workshop topic — derived from each user's weakest adoption domain
- Rank — groups ordered by number of affected users (biggest impact first)
- ⚡ Key users — high-influence users whose adoption cascades through the org
- Health impact — projected score improvement if the workshop is delivered
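The grouping rule above — one cluster per weakest-domain workshop, ranked by affected users — can be sketched as follows. The workshop names and mapping here are hypothetical examples, not manifold's actual catalog:

```python
from collections import defaultdict

# Hypothetical mapping from a weakest domain to a workshop topic.
WORKSHOPS = {
    "security": "Security Scanning 101",
    "code_review": "Effective Code Review",
    "ci_cd": "Pipeline Fundamentals",
}

def form_groups(profiles):
    """Cluster users by the workshop addressing their weakest domain,
    then rank groups by affected-user count, biggest impact first."""
    groups = defaultdict(list)
    for user, scores in profiles.items():
        weakest = min(scores, key=scores.get)
        groups[WORKSHOPS.get(weakest, weakest)].append(user)
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

profiles = {
    "u1": {"security": 0.1, "ci_cd": 0.8},
    "u2": {"security": 0.2, "ci_cd": 0.9},
    "u3": {"ci_cd": 0.1, "security": 0.7},
}
print(form_groups(profiles))
# → [('Security Scanning 101', ['u1', 'u2']), ('Pipeline Fundamentals', ['u3'])]
```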
Project Teams
Your team members' adoption profiles. These scores reflect activity patterns and project configuration — they are adoption signals, not individual performance evaluations.
How Profiles Work
Each row represents one user's adoption fingerprint across six domains.
- Archetype — dominant adoption pattern (priority-ordered classification)
- Influence — review reach + project breadth + cross-group activity
- Gap — strongest → weakest domain (the enablement opportunity)
- Domain scores — percentile-ranked adoption in each area
- Radar — visual adoption shape at a glance
Click any row to expand the full profile with radar chart and recommendations.
| User | Archetype | Confidence | Influence | Gap | SC | CR | CI | Sec | Col | Inf | Profile |
|---|---|---|---|---|---|---|---|---|---|---|---|
How influence is scored: review reach (40%) — distinct users reviewed across projects; project breadth (30%) — distinct projects active in; cross-group activity (30%) — diversity of project namespaces. Scores are percentile-normalized. A high score means this user's behavior change would ripple across the organization.
How your adoption is changing. Track your team's progress across domains and focus on areas where enablement efforts are paying off — or where attention is needed.
How Trends Work
Trends compare two analysis runs to show how adoption is changing.
- Improving — total score increased between runs
- Declining — total score decreased
- Stable — change within ±5%
- Archetype shifts — users whose classification changed between runs
Run manifold analyze --previous-dir to enable trend tracking.
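The trend labels above can be sketched as a comparison of total scores between two runs, using the documented ±5% stable band. The zero-baseline handling is an assumption:

```python
def classify_trend(previous, current, stable_band=0.05):
    """Label the change in a total score between two analysis runs.
    Changes within ±stable_band (relative) count as stable."""
    if previous == 0:
        # Assumed edge case: any activity from a zero baseline improves.
        return "improving" if current > 0 else "stable"
    change = (current - previous) / previous
    if change > stable_band:
        return "improving"
    if change < -stable_band:
        return "declining"
    return "stable"

print(classify_trend(3.0, 3.6))   # → improving  (+20%)
print(classify_trend(3.0, 2.95))  # → stable     (−1.7%)
```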
How manifold works — methodology, data sources, and limitations.
Collect → Profile → Identify → Group → Target
The Enablement Pipeline
manifold processes GitLab API data through five stages to produce actionable workshop cohorts.
Stage 1: Collect
Fetches user activity from the GitLab REST API using an admin personal access token with api scope. Data includes events, merge requests, issues, and pipeline jobs. Collection is rate-limited and supports incremental fetching.
Stage 2: Profile
Each user is scored across 6 adoption domains:
- Source Control — push count, MR authorship, branch creation
- Code Review — reviews given, merge actions, cross-project reviews
- CI/CD — pipeline runs, source diversity (push, schedule, API, trigger), and deploy stage activity
- Security — SAST, DAST, dependency scanning, and secret detection presence; scan frequency; passive exposure from contributing to scanner-enabled projects
- Collaboration — cross-project activity, issue triage, comments
- Infrastructure — automation pipeline sources (schedule, API, trigger), multi-project breadth, and source diversity
Source Control, Code Review, CI/CD, and Collaboration are percentile-normalized across all users — a score of 70% means the user is in the 70th percentile for that domain on this instance. Security and Infrastructure use threshold-based scoring (0.0–1.0 directly) based on scanner presence and automation patterns.
Stage 3: Identify
For each user, the gap between their strongest and weakest domain is calculated. The “Score Gap” represents the difference in adoption between domains — a large gap suggests focused enablement would help.
Stage 4: Group
Users are clustered by the workshop topic that addresses their weakest domain. Each group represents users who would benefit from the same enablement session.
Stage 5: Target
Users are ranked by influence — their potential to cascade adoption change through the organization. Influence is computed from review reach (40%), project breadth (30%), and cross-group activity (30%).
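With the documented weights, the influence computation reduces to a weighted sum. A minimal sketch, assuming the three inputs arrive already percentile-normalized to 0–1:

```python
def influence(review_reach, project_breadth, cross_group):
    """Weighted influence score per the documented Stage 5 weights:
    review reach 40%, project breadth 30%, cross-group activity 30%."""
    return 0.40 * review_reach + 0.30 * project_breadth + 0.30 * cross_group

print(round(influence(0.9, 0.5, 0.7), 2))  # → 0.72
```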
Health Score
A 0–100 composite of: domain coverage (30%), archetype diversity (20%), security adoption (20%), review culture (15%), CI/CD adoption (15%).
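The composite above is a straightforward weighted sum. A sketch, assuming each sub-score is already normalized to 0–1 (how each sub-score is derived is not shown here):

```python
def health_score(coverage, diversity, security, review, ci_cd):
    """0–100 composite with the documented weights: domain coverage 30%,
    archetype diversity 20%, security adoption 20%, review culture 15%,
    CI/CD adoption 15%."""
    return 100 * (0.30 * coverage + 0.20 * diversity + 0.20 * security
                  + 0.15 * review + 0.15 * ci_cd)

print(round(health_score(0.8, 0.6, 0.4, 0.7, 0.5), 1))  # → 62.0
```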
Archetypes
Users are classified into the first matching pattern (priority-ordered):
- Dormant Expert — 2+ domains ≥ 50%, inactive for 45+ days
- Builder — High Source Control + CI/CD, low Review + Security
- Platform Engineer — High Infrastructure + CI/CD
- Reviewer — High Code Review + some Collaboration
- Solo Contributor — High Source Control, low everything else
- Generalist — 3+ domains above 50%
- Emerging User — Low-moderate across 3+ domains
- Developing — No dominant pattern (catch-all)
“High” means a normalized score ≥ 0.60. For percentile-based domains (Source Control, Code Review, CI/CD, Collaboration), this is the 60th percentile of non-zero scores. For threshold-based domains (Security, Infrastructure), it is a direct 0.60 score. Confidence reflects how far above or below the thresholds the scores fall.
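The first-match, priority-ordered classification can be sketched as a chain of checks. The 0.60 and 50% thresholds come from the text above; the exact boundaries for "some Collaboration", "low everything else", and "low-moderate" are assumptions in this sketch:

```python
def classify(scores, days_inactive, high=0.60):
    """First-match archetype classification in documented priority order.
    `scores` maps each of the six domains to a 0-1 normalized score."""
    above_half = sum(1 for v in scores.values() if v > 0.50)
    if above_half >= 2 and days_inactive >= 45:
        return "Dormant Expert"
    if (scores["source_control"] >= high and scores["ci_cd"] >= high
            and scores["code_review"] < high and scores["security"] < high):
        return "Builder"
    if scores["infrastructure"] >= high and scores["ci_cd"] >= high:
        return "Platform Engineer"
    if scores["code_review"] >= high and scores["collaboration"] >= 0.30:
        return "Reviewer"
    if scores["source_control"] >= high and all(
            v < 0.30 for k, v in scores.items() if k != "source_control"):
        return "Solo Contributor"
    if above_half >= 3:
        return "Generalist"
    if sum(1 for v in scores.values() if 0.20 <= v <= 0.50) >= 3:
        return "Emerging User"
    return "Developing"

builder = {"source_control": 0.8, "ci_cd": 0.7, "code_review": 0.2,
           "security": 0.1, "collaboration": 0.3, "infrastructure": 0.2}
print(classify(builder, days_inactive=5))  # → Builder
```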
Limitations
- Scores measure activity volume and feature usage, not code quality or skill level
- Security scores reflect scanner presence and scan frequency, not individual security knowledge or vulnerability remediation
- Infrastructure scores reflect automation pipeline patterns, not CI template authorship
- All data stays on-premise — no external services or telemetry
Data Source
All data is collected from the GitLab REST API using an admin personal access token with api scope. No database access, SSH, or agent installation required.
Privacy
User IDs are shown by default. The privacy toggle reveals usernames when enabled by an admin. All data stays on-premise — manifold makes no external network calls and includes no telemetry.
Version
manifold v0.1.0. Live demo at manifold.dunn.dev.
What This Dashboard Does NOT Measure
- Code quality or review thoroughness
- Individual skill level or expertise
- Delivery speed or cycle time (that's Value Stream Analytics)
- Deployment reliability (that's DORA metrics)
- Feature checkbox adoption (that's DevOps Score)
manifold measures adoption patterns — how broadly and deeply your team uses GitLab's capabilities. Use it alongside VSA and DORA for a complete picture.
Quick Start
Try it locally with synthetic data:
# Generate synthetic demo data
manifold generate --users 500 --seed 42
# Analyze
manifold analyze --input-dir data --output-dir analysis
# Render dashboard
manifold render --input-dir analysis --output-dir public
# Serve locally
manifold serve --dir public --addr :8080
For a real GitLab instance:
# Collect data from your GitLab instance
manifold collect --url https://gitlab.example.com --token $TOKEN
# Analyze collected data
manifold analyze --input-dir data --output-dir analysis
# Render dashboard
manifold render --input-dir analysis --output-dir public
# Serve locally
manifold serve --dir public --addr :8080
Or visit manifold.dunn.dev for a live demo with synthetic data.