
Project reboot – A new default

At Shuru, we’ve always believed in pushing frontiers and staying ahead of the curve. We’re an organization of about 100 people, and our teams have been using AI tools for over a year and a half – for code generation, brainstorming, debugging, and more.

But like any growing company, our adoption has been uneven.

Some of our devs rely heavily on AI; others still hesitate. Some engineers experiment with Copilot, Claude, Cline, and Gemini while others barely touch them.

We want to change that. And we want to do it in public.

Welcome to Project Reboot – our internal effort to spark aligned AI usage across the org, learn fast, and lead by example.


Our objective: Why we’re doing this

We believe AI is a productivity multiplier. But its benefits won’t be fully realized if only a fraction of the org uses it effectively. Project Reboot is our effort to change that.

This blog post is the first in a series that will document our journey and help other organizations collect dos and don’ts from our experience.

Complexity: What makes it difficult

Trying out new tools on side projects is fun. Trying them out where they don’t interfere with work responsibilities? Even more fun.

But the reality for most developers is: they have strict responsibilities. They’re expected to deliver what’s been assigned (which is often what they’ve done before), predictably and reliably.

Trying out new tools in such setups can feel risky. What if the manager disagrees? What if the AI suggests an inefficient query? What if you spend more time learning than delivering?

These risks get worse when management demands efficiency gains, but doesn’t support experimentation.

Not all orgs can afford to experiment freely, and that’s fair – from the outside, it’s easy to say “just try stuff”. On the inside, however, it’s risk, timelines, and real consequences.

Whatever the reason, our data confirms the same thing: even with unrestricted access, adoption varies wildly.

Getting one dev excited about a new tool is easy. Getting hundreds to adopt it and actually use it well? That’s where the complexity kicks in.

Then comes the onslaught of agents – one day you’re testing Copilot. The next day, Cline launches agent mode. You switch to Cline, and now Roo-Code has a debug mode. You’re finally getting the hang of IDE agents, and boom! Claude Code shows up and makes you question whether IDEs are even necessary.

This wave isn’t slowing down. Even in orgs where developers overcome internal resistance and start exploring, the tools they adopt might become obsolete in a matter of weeks. So how do we keep our teams in an acceptable radius of ‘the cutting edge’?

Prerequisite knowledge: What we learned from our internal AI usage survey

To kick off Project Reboot, we conducted a detailed survey with our engineering team across the organization to understand how developers are engaging with AI tools*.

Key findings:

Code generation per modality
Visual graph showing breakdown of AI code contribution for two most used modalities.
  • On average, AI-generated code accounts for 25–50% of total developer output, particularly outside of IDE environments.
  • Only about 10% of developers fall into the top usage band, generating 50–75% of their code with AI.
  • By contrast, roughly 50% of developers are in the lowest usage band, with less than 25% of their code being AI-generated.
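As a rough illustration of how band breakdowns like the ones above can be derived, here’s a minimal Python sketch that buckets self-reported AI-code percentages into the same bands. The `responses` values are invented for the example, not our actual survey data:

```python
from collections import Counter

# Hypothetical survey responses: each developer's self-reported share
# of AI-generated code, as a percentage (illustrative values only).
responses = [10, 20, 15, 30, 40, 22, 60, 70, 35, 18]

def band(pct):
    """Bucket a percentage into the usage bands used in the report."""
    if pct < 25:
        return "<25%"
    elif pct < 50:
        return "25-50%"
    elif pct < 75:
        return "50-75%"
    return ">=75%"

# Count how many developers fall into each band, then print shares.
distribution = Counter(band(p) for p in responses)
for b in ("<25%", "25-50%", "50-75%", ">=75%"):
    share = distribution[b] / len(responses)
    print(f"{b}: {share:.0%}")
```

With the sample values above, half of the respondents land in the lowest band – mirroring the shape (though not the exact numbers) of our survey results.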
Purpose-of-use

AI is used most for writing test cases, then for code explanation, boilerplate generation, refactoring, and more.

Visual graph showing most common use-cases for AI tool usage.

Key observations:

Most used tools
  • GPT-based tools lead usage.
  • Claude and Gemini follow.
Visual graph showing most used AI tools.
Modalities
Visual graph showing breakdown of most preferred usage modalities.
  • Web UI based tools slightly edge out IDE tools, but both are important.
  • VS Code dominates the development stack.
Hindrances

Top blockers to adoption are buggy AI output, preference for manual workflows, and security concerns.

Visual graph showing most common blockers and hindrances.

(Read the full combined report here → https://github.com/shurutech/reboot/blob/main/analysis-report-for-internal-ai-survey.pdf)

*In our view, the ideal scenario is one where more than 50% of code is produced with AI assistance.


What’s next: The plan

This post is just the beginning. As the project unfolds, we’ll keep sharing our wins, losses, challenges, and data-driven pivots.

Building in Public, Learning Together

We’re excited to share what works – and what doesn’t – as we explore how AI can reshape engineering workflows. We’ll publish every milestone, share our metrics, and keep iterating in public.

If you’re solving similar challenges – whether in a 10-person startup, or a 1,000-person org – we’d love to collaborate!

Stay tuned with Shuru Labs for more from Project Reboot.

