
Two years as a VC-backed startup

Written by Michael Wendland and Ian Glass, co-founders of Intalayer



This is the story of how we founded and ran a VC-backed startup and why we decided to close down.


The past two years as startup founders have been the most intense, exciting and emotional learning experience of our lives.


Recently, however, we made the difficult decision to close down Intalayer.

We want to thank our 3rd co-founder, Dain, our early employees, investors, customers and everyone who supported us.


Now, we can reflect on our experience with the clarity of hindsight, recognise what we achieved and crystallise our learnings. We hope that learning from our mistakes will increase our probability of success when we start another company.

This is quite a detailed write-up, but we believe in transparency and that lessons lie in the detail.


If you’re a former, current or aspiring founder, we hope our story will benefit your journey. Please contact us (Michael & Ian) if you’d like to ask any questions directly.




10 hard lessons we learned


💡 Of the countless lessons we learned, these 10 stand out as shaping our story


Lessons about founding teams

  1. Don't undervalue the energy a co-founder brings to a founding team. When facing months or years of challenges, setbacks and tiredness, founders cannot rely solely on intrinsic motivation. Co-founders must energise each other.

  2. Co-founder vulnerability is a strength. Like any great, long lasting relationship, co-founders have to be emotionally transparent and honest. No-one in your life will understand what you’re going through like your co-founders. The more you share, the more you can support each other.

Lessons about problem spaces

  1. The problem you see or hear is rarely the problem that needs to be solved. Look deeper and ask ‘why’ to uncover root causes. Once the problem is correctly identified and understood, it is much easier to design a valuable solution.

  2. Some painful problems are very difficult to solve. ‘Wicked’ problems can have multiple stakeholders, be tangled with other problems and have roots in cultural challenges or conflicting human behaviours. Aim to deeply understand the ecosystem you’re operating in and seek out ‘contained’ problems instead.

Lessons about solutions

  1. ‘Time to value’ is the most important metric startups should design for. Because you are an unknown entity, your target customers will have low confidence in your ability to change their lives. Focus on delivering value to them as quickly and effortlessly as possible.

  2. Try to manually solve the problem before coding anything. If you can repeatedly solve a problem manually and prove target customers are willing to pay, you will: understand the problem deeply, reduce risk and build confidence in how to gradually automate and productise a solution.

Lessons about feasibility

  1. Be cautious if veering outside your team’s areas of expertise. Don’t fall victim to the Dunning–Kruger effect and overestimate your ability to succeed in an unfamiliar industry, competitive ecosystem, problem space or task. Engage experts quickly to reveal ‘unknown unknowns’, plan for the ‘known unknowns’ and accelerate work on the ‘known knowns’.

  2. Don’t chase the ‘asymptote of death’, a target you get closer and closer to, but can never reach. Learn quickly how complete or accurate a solution needs to be before a target customer will realise value. If feedback leads your solution to become increasingly complex, before it can be perceived as valuable, consider changing approach or whether the problem is even solvable with your resources.

Lessons about difficult times

  1. Turn previous mistakes into constraints and criteria to guide future decisions. If you compound the knowledge you gain, you’ll continually make better decisions. This will help you evolve or pivot with confidence and avoid making repeat mistakes. A startup is as much about growing yourself as it is the business.

  2. Don’t allow failures to make you feel like a failure. There will be challenges, setbacks and even long periods of grinding without significant wins. Defining yourself by your company’s success will increase your suffering in difficult times.


Continue to our full story below and read how our experience taught us these lessons.

1. Our Origin in the Antler Startup Program


In January 2020, we joined Antler Australia’s startup generator program. We had never met before. Now, that feels like a lifetime ago.


Antler is an early-stage venture capital firm that also supports new generations of entrepreneurs through a startup generator program. Imagine a combination of Dragon's Den, The Apprentice and Love Island.


80 individuals from a broad range of backgrounds were brought together for a 10-week program in Sydney.


Our cohort was made up of:

  • Technology Leaders: software engineers, data scientists and machine learning experts

  • Business Leaders: management consultants, product managers, marketers, sales and operations experts

  • Domain Leaders: experts in financial services, healthcare, energy, corporate HR as well as Ph.D scientists

Antler also carefully selected a cohort that represented a diversity of cultures, genders and life stages.


Despite this diversity, the three things we all had in common were the drive to find compatible co-founders, start a technology company and pitch for pre-seed investment from Antler.


This felt like a once in a lifetime opportunity.



Forming our founding team


The 10-week Antler program was defined by three main goals:

  1. Find co-founders and form a team

  2. Validate a problem and gather early evidence of a business model that could represent a big market opportunity

  3. Pitch as a team for pre-seed investment


Our program started on Monday 13th January 2020. The first two weeks were made up of founder skills training and daily Design Sprints with mini-pitch challenges. In week 1, Antler defined the daily challenge topics and structured the cohort into new cross-functional teams of 5-10 individuals every day. The goal was to help us meet and work with as many potential co-founders as possible, as quickly as possible.


In week 2, we defined our own challenges and picked our own teams. It may sound ruthless, but we needed to mentally eliminate members of the cohort so we could focus the following weeks on working with the most compatible potential co-founders.


One Antler partner likened selecting compatible co-founders to how the military select special forces units, tasked with performing the most complex, classified, and dangerous missions.

We needed to select co-founders with complementary skills and experience that overcome individual weaknesses, personality types that energised us and character traits that we could commit to working with for the next 5+ years. Ultimately, we needed to trust relative strangers with our dreams and success.


Weeks 3-8 played out like a reality TV show: co-founder courtships, tension, jealousy as potential co-founders still 'dated' other teams, the excitement of teams forming and individuals reeling from team break ups.


Antler often reminded us that the pressure was on because in week 8 there was a pre-investment committee pitch. Those without a co-founding team had a very low probability of being invited to pitch for pre-seed investment and teams without a validated problem or compelling solution might also not be invited.


We (Dain, Ian and Michael) were one of the last cohort teams to form, one week before the pre-investment committee pitch.


We had all been through team break ups but came together because of our complementary skills and our shared experience with a common problem in fast growing software companies. Equally important, we had compatible personalities and we energised each other.




Becoming a fully remote, global team

Based on our strong team, validated problem and compelling idea (covered in detail below), Antler invited us to pitch to the Investment Committee for pre-seed funding. We'd present to Antler Australia partners, Antler Global partners, Antler VC fund Limited Partners as well as a selection of Partners from other local venture capital funds.


However, only a few days before we pitched to the Antler Investment Committee, Sydney began to enter lockdown as the threat of Covid-19 spread.


Fearing difficulties eventually returning to his home and family in New Zealand, Ian decided to catch a next day flight to Auckland.


We were now a fully remote, globally dispersed team, operating across time zones.


Receiving pre-seed funding


On Tuesday 24th March 2020 we pitched to the Antler Investment Committee.

After an anxious wait, we were ecstatic to hear that Antler decided to invest in us.

It felt like a huge win, even though it was the very start of our journey together.




2. The problem we set out to solve

In our previous Product Management and Software Engineering careers, we recognised that as software companies start to grow rapidly, communication and collaboration become increasingly inefficient between R&D teams (Product, Design & Engineering) and customer facing teams (Customer Support, Customer Success, Sales, Marketing). This can cripple a company’s ability to make fast, customer centric product decisions while delivering the best customer experience.


Through extensive research, we identified that this problem was most acute in software companies pursuing ‘Product-led Growth’.


Product-Led Growth is a go-to-market strategy that relies on using a company’s product as the main vehicle to acquire, activate, and retain customers. Product-led Growth is the strategy that guided the success of companies like Zoom, Slack and Dropbox. What makes ‘product-led’ companies unique is that all teams leverage the product to hit their goals.



In product-led companies, efficient data sharing, transparency and communication between R&D and customer facing teams is essential to drive growth. Support, Success, Sales and Marketing teams should inform product decisions and receive product information that supports growth initiatives or helps improve the customer experience.

  • Essential data passed from customer facing teams to R&D: customer usability issues, software bugs, new feature requests, sales objections, product improvements to drive growth and market insight

  • Essential data passed from R&D to customer facing teams: product knowledge and benefits, use cases, product roadmap, product fixes and new feature releases

The big problem in product-led companies is that the excitement of rapid growth leads to scaling pain:

  • Customer growth accelerates

  • More product teams are spun up to work in parallel

  • Products become increasingly complex

  • All R&D and customer facing teams expand headcount

  • Teams work across different systems and tools

Soon, silos emerge and processes that were once effective fail. Essential data sharing, transparency and communication between R&D and customer facing teams breaks down.

Quantifying the impact of this problem:

  • 89% of B2B and B2C software companies say it is not easy to gather and organise customer feedback.

  • 63% of companies say there is no clear communication and collaboration between the R&D and customer facing teams. (Source: Pendo.io 2019)

Ultimately, R&D teams in these companies cannot make customer centric product decisions and customer facing teams cannot leverage the product.

  • The result is 80% of released product features are rarely or never used. (Source: Pendo.io 2019)

3. Our Vision


Our vision for Intalayer was to become ‘The operations platform for product-led companies’.

We wanted to build the operations layer to enable efficiency between R&D and customer facing teams in rapidly growing product-led companies.


4. Ideal Customer Profile


We defined our ideal customer profile as:

  • Software companies pursuing product-led growth

  • Between 50 and 200 employees

  • Formalised R&D and customer facing teams

  • Recently entered a period of significant growth driven by:

    • Organic surge in demand: including demand triggered by influences such as Covid-19

    • A recent capital raise: most likely Series A or Series B


  • Rapidly hiring across R&D and customer facing teams, especially for operations roles

5. Identifying Customer Support as our beachhead


Due to the nuances in communication and collaboration needs between R&D and different customer facing teams, we identified a ‘beachhead’ to guide our go-to-market.

Our initial focus was the inefficiency in communication and collaboration between Customer Support and R&D teams.


The reason for this focus was that, from our research, we identified the pains of scaling are most intense for Support teams. Inefficiencies threaten the primary KPIs of the Support team and leave them feeling unheard, undervalued and under-utilised.


As customer growth accelerates, the volume of customer feedback received by Support teams surges from dozens of ‘items’ a week, to hundreds, to thousands. Overwhelmed by volume, Support leaders are unable to easily collate the customer feedback trends they need to advocate for customers and inform product decisions.


They can spend days every fortnight sourcing customer data across disparate Support, CRM, Customer Success and analytics tools, in an attempt to build strong business cases for R&D teams to resolve issues and requests quickly, or to investigate root causes that will reduce support volumes.


In many cases Support leaders are simply trying to stop their teams from buckling under pressure and they do not have the time or resources to effectively advocate for customers to R&D teams. This leads to unresolved customer needs and Support leaders feeling they have ‘lost their seat at the table’.


We decided this specific problem would be our starting point.


Our ambition was that once we successfully solved the problems between Support and R&D, we would expand to sequentially enable efficient operations between R&D and Customer Success, Sales and Marketing teams.


6. Our initial value proposition and solution


Our initial value proposition and solution were based on the four core ‘jobs to be done’ that Support leaders have to complete when building business cases to advocate for customers (a simplified sketch of these jobs as a pipeline follows the list).

  • Collate: Collect and collate all customer issues and requests into one place and group by similarity

  • Contextualise: Consolidate customer data from tools such as Salesforce, Gainsight and Jira to contextualise the feedback

  • Analyse: Assess if the feedback could impact business goals and prioritise the feedback that could have the greatest impact

  • Advocate: Present the prioritised feedback in a compelling way to R&D teams to inform product decisions
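To make these four jobs concrete, below is a deliberately simplified sketch of them as a Python data pipeline. It is illustrative only: the dataclass fields, the CRM lookup and the scoring rule are assumptions for the example, not Intalayer’s actual data model.

```python
# Deliberately simplified sketch of the Collate -> Contextualise -> Analyse jobs.
# Field names, the CRM lookup and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FeedbackItem:
    source: str                     # e.g. "zendesk" or "jira"
    text: str                       # the raw customer feedback
    account_id: str
    group: Optional[str] = None     # theme assigned during Collate
    context: dict = field(default_factory=dict)  # CRM data added during Contextualise
    priority: float = 0.0           # score assigned during Analyse

def collate(items):
    """Group items by a crude key (first keyword); real grouping would use similarity."""
    for item in items:
        item.group = item.text.lower().split()[0]
    return items

def contextualise(items, crm):
    """Attach customer data, such as annual contract value, from a CRM lookup."""
    for item in items:
        item.context = crm.get(item.account_id, {})
    return items

def analyse(items):
    """Rank items so the highest-impact feedback can be advocated for first."""
    for item in items:
        item.priority = item.context.get("annual_contract_value", 0)
    return sorted(items, key=lambda i: i.priority, reverse=True)
```

The Advocate step is the human one: presenting the ranked output to R&D teams.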






Framing hypotheses to test

Our solution and go-to-market strategy was based on a number of assumptions that we framed as hypotheses to test with our prototype and beta products.





7. Building a waitlist and launching our beta

We focused on engaging with Support leaders in companies fitting our ideal customer profile to test our hypotheses and acquire companies to join our private or public beta.



Through a combination of direct outreach, community engagement and content marketing, we arranged conversations with dozens of Support leaders from leading product-led companies around the world.


Through these conversations, we validated our priority ‘Desirability’ hypotheses.





We started to grow a waitlist of Support leaders excited by our proposition and the impact Intalayer could have.


From this waitlist we successfully onboarded 10 private beta customers from Australia and the USA.


8. Encountering challenges with acquisition and onboarding


Despite validating our priority ‘Desirability’ hypotheses, while working to acquire and onboard our first beta customers, we encountered common sales objections and worrying patterns in our sales cycle and onboarding process.


These objections and patterns invalidated most of our ‘Viability’ and ‘Feasibility’ hypotheses.



We believe Support leaders have decision making authority to install the Intalayer app onto Zendesk without approval from additional stakeholders - Invalidated


Security objections

The Intalayer Zendesk and Jira apps used an NLP algorithm to automatically analyse the content of Support tickets and Jira ‘issues’, then match and group them by similarity. Additionally, Intalayer integrated with Salesforce to contextualise and prioritise feedback based on customer data such as annual contract value and strategic value.
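For illustration, here is a minimal sketch of the kind of similarity grouping described above, using TF-IDF vectors and cosine similarity. This is not our production pipeline; the example tickets and the 0.3 threshold are arbitrary.

```python
# Minimal sketch of grouping support tickets by text similarity (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tickets = [
    "Export to CSV fails with large reports",
    "CSV export times out on big reports",
    "Cannot change the billing email address",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(tickets)
similarity = cosine_similarity(vectors)

# Greedily attach each ticket to the first earlier ticket it closely matches.
groups = {}
for i in range(len(tickets)):
    leader = next((j for j in range(i) if similarity[i, j] > 0.3), i)
    groups.setdefault(leader, []).append(tickets[i])

for leader, members in groups.items():
    print(f"{tickets[leader]!r}: {len(members)} similar ticket(s)")
```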


While highly desirable, these features required Intalayer to store Personally Identifiable Information (PII). Perceived data privacy risks led to Support leaders needing to seek approval from Security and Legal teams.


Most fast growing product-led companies rightly have very robust security and data privacy policies. In order to acquire customers we had to meet their requirements. Initially, this led to immediate rejection by high potential customers.


To overcome this challenge, we improved our security policies and reinforced our cloud infrastructure. We also used Vanta to provide reports verifying our security measures for decision makers to review.


While this provided us with immediate credibility, potential customers were still required to inform their existing users that Intalayer would become a new data sub-processor.

Ultimately, security objections and hurdles in the sales cycle led to numerous back and forth communications and weeks of delays.


Most worryingly, we faced these challenges before Intalayer could deliver any value. We had very limited testimonials and social proof to build confidence that the benefits Intalayer would bring were worth the perceived risk and hassle of joining our beta.


We believe Support leaders will drive their Support agents to adopt the Intalayer Zendesk app in their workflow - Invalidated


Risk of workflow change

While Support leaders were undeniably excited by our proposition and the impact Intalayer could have, it was clear that their priority was still the daily productivity of their Support agents. As a result, Support agent workflows have often been finely optimised by Support leaders and Support Operations specialists.


This created an adoption and onboarding challenge as, in order to see value in Intalayer, Support leaders would first need to drive change in their agent workflows.

Similar to Security objections, we had limited testimonials and social proof to build confidence in the benefits that Intalayer would bring. At this point, as team productivity was their priority, Support leaders perceived no change as less risky than driving any workflow change.

While we successfully built confidence in the Support leaders that joined our beta, the in-person guidance needed to convert and onboard each beta customer highlighted a scalability risk for us.


We believe Support leaders will influence Jira admins to install the Intalayer app on Jira without needing buy-in from Product leaders - Invalidated


The majority of conversations where Support leaders expressed excitement in Intalayer ended with “I’d like to run this by our product leads to get their buy-in on the approach”.

We learned that Support leaders had often experienced genuine emotional distress from feeling unheard, undervalued and under-utilised. They had made repeat attempts to advocate for customers but frequently failed to influence decisions.


While Intalayer’s proposition filled Support leaders with optimism, when faced with the reality of next steps to onboard to our beta, this optimism was quickly replaced with trepidation. Support leaders did not want to risk another failed attempt.


When Support leaders did remain optimistic, we supported them with an introduction to Intalayer for Product leaders. However, if Product leaders did not respond, delayed or deferred a meeting, Support leaders avoided persisting.


This indicated cultural challenges that would be difficult to overcome.

As with change to Support agent workflows, we successfully provided guidance by repeatedly following up and facilitating discussion between Support and Product leaders; however, this highlighted another scalability risk for us.


💡 Ironically, despite targeting product-led companies, selling Intalayer required an Enterprise sales approach. The barriers to moving to a self-service model put us at risk of landing in the ‘Startup Graveyard’. At a relatively low price point, the unit economics of a long sales cycle were simply unviable.


We believe Product leaders will agree feedback items prioritised by Support leaders using Intalayer should be investigated further - Invalidated


A key objection raised by Product leaders was that, while the contextualised and prioritised feedback provided by Support leaders would be valuable, they need to consider feedback from all customer facing teams when making decisions about what feedback to investigate further.


This objection, while valid from a Product perspective, further decreased Support leader confidence that Intalayer would help them to effectively advocate for customers and inform product decisions.


We believe the majority of companies fitting our ideal customer profile use Zendesk, Jira and Salesforce - Invalidated


For Intalayer to deliver the value promised in our value proposition, customers needed to use Zendesk, Jira and Salesforce.


We learned that while these tools are widely used across companies fitting our ideal customer profile, there was a much lower probability that a company used all three.


Integration/tech stack compatibility became a future risk too. To deliver the value planned in our product roadmap we would need to integrate with additional tools such as Customer Success and analytics tools. These would need to be sophisticated integrations, beyond what a product like Zapier could enable. This would mean custom development of integrations which are complex to build and maintain.


We realised that with each new integration, the probability of identifying companies with a compatible tech stack would decrease. The nature of our product would handicap our ability to get to market and reduce our Serviceable Available Market.




9. Identifying a new target buyer and user - Product Operations

After 7 months of iteration and testing approaches to overcome the acquisition and onboarding challenges we faced, we decided to make fundamental strategic changes.

We were confident in our vision and ideal customer profile but we lost confidence in Support teams as our beachhead and our strategy to expand sequentially across Customer Success, Sales and Marketing teams.


During our Customer Development process, we learned about a promising role that first emerged in leading Silicon Valley product-led companies and quickly spread globally - ‘Product Operations’.


Product Operations is an operational function within a Product team that helps the Product team build better products. ‘Product Ops’ optimises the intersection of customer facing teams and R&D to help the Product team operate as effectively and efficiently as possible.

Alongside other activities, like managing Product team tools and experimentation, Product Ops is responsible for helping Product leaders make more reliable decisions by equipping them with data.

“Product ops is about setting up a system in the product organization to get the right data — both quantitative and qualitative — from the right places into the process for creating better products” - Melissa Perri, Founder, Produx Labs

Our secondary research highlighted that customer feedback from Support, Success, Sales and Marketing teams was a priority source of data. With this learning we defined an initial hypothesis. If true, Product Operations could be a more empowered target buyer and user.

Desirability hypothesis: We believe Product Ops leaders would value the ability to automatically collate, contextualise and analyse customer feedback, so they can easily present data to help Product leaders.


We quickly planned research surveys with ~50 Product Ops leaders and interviewed ~20.

We learned that Product Ops leaders are often highly analytical but rely on manual analysis across a range of tools, from Excel to Tableau, to collate, contextualise and analyse customer feedback. They were often searching for or ‘hacking’ ways to automate these jobs to be done.

This validated our initial hypothesis.


Desirability hypothesis: We believe Product Ops leaders would value the ability to automatically collate, contextualise and analyse customer feedback, so they can easily present data to help Product leaders - Validated


Evolving our solution

Interviews with Product Ops leaders built our confidence that we had identified a significant pain they would pay to solve and that our solution was compelling.


However, interviews also confirmed a key challenge we had faced with our previous beachhead: Product leaders need to consider feedback from all customer facing teams when making decisions about what feedback to investigate further. They need the full picture.

For Intalayer to deliver value to Product Ops leaders, our solution would need to collate feedback across all customer facing teams.


Product Ops leaders also expressed gaps with existing Product tools such as Productboard, Aha and Canny, particularly their inability to contextualise customer feedback with customer data from CRM and Customer Success tools.


We began designing evolutions to our solution to solve this gap in existing Product tools and collate feedback across all customer facing teams, while avoiding the integration/tech stack compatibility challenge we had encountered.


10. An evolving competitive space with high barriers to entry

Unfortunately during this period, these existing tools announced evolutions in their own products, propositions and roadmaps. They were evolving to meet the needs of Product Ops leaders and capitalise on the opportunity to improve how R&D and customer facing teams work together in product-led companies.


We viewed this as validation that we had identified the correct trend, pains and opportunity. While we were pleased we had arrived at the same conclusion as major players with greater access to the market, this also presented a major risk for us.


If these tools, commonly used by our ideal customer profile, evolved to satisfy the same jobs to be done, we may be perceived as directly competitive to them.


In further interviews with Product Ops leaders, we tested evolved solution prototypes and received concerning feedback:

  • “How is this different from where Productboard is heading?”

  • “Why would I switch from Productboard to Intalayer?”

  • “If Intalayer also did X, I would consider testing if it works better for us”

What previously appeared as a gap in the market had evolved into a space with high competitive barriers to entry.


Pursuing feature parity with these existing tools would be a long road and while we would be trying to catch up, they would keep innovating - with far greater resources and existing market share.


11. Time to pivot

We acknowledged that for Intalayer to survive, we’d have to make a more significant strategic pivot than just our target buyer and user.


The overarching and most significant challenge we faced was that we hadn’t identified and focused on a 'contained' problem to solve.


'Contained' means there are no significant upstream or downstream jobs to be done, processes or teams that impact the problem or that a solution depends on.

We realised our previously defined problem and solution covered numerous interconnected jobs to be done, processes and teams.


These factors meant we had been focused on a more 'wicked' problem that is much harder to solve. Wicked problems are defined by characteristics such as:

  • The problem involves many stakeholders with different opinions, values and priorities

  • The problem is impacted by culture and human behaviour

  • The problem’s roots are complex and tangled

  • The problem is interconnected with other problems

This was a significant risk for an early stage startup that should be initially focused on developing the best solution to a contained problem in a market niche.


Finding focus

We remained committed to our vision and ideal customer profile but decided to define and focus on a contained problem within the broader space we had been operating in.

Our starting point was to review our market research, the feedback we received and the knowledge we had gained about fast growing product-led companies.


We defined a range of problems and conducted root cause analyses until we arrived at four contained problems that avoided the characteristics of more wicked problems. Each contained problem represented a single job to be done.


A modular approach

We believed that each of these contained problems could be solved and deliver significant value to Product Ops leaders in product-led companies.


To focus, we decided to select one contained problem to solve first. Our ambition was to then sequentially solve the remaining three problems. We envisaged modular products that could:

  1. Deliver fast ‘time to value’

  2. Deliver value independently

  3. Generate revenue independently

  4. Over time be combined to increasingly deliver and capture more value

Prioritising contained problems

We re-interviewed our Product Ops contacts to discuss the contained problems we defined and to prioritise them against the following assessment criteria:

  • Can we very clearly describe the problem in granular detail?

  • Can the negative impact of the problem be quantified and measured in isolation?

  • How painful is each problem relative to each other?

  • How important is solving the problem relative to their full role and responsibilities?

  • Do we have evidence target buyers are willing to pay to have the problem solved?

  • Is the problem experienced consistently across companies? (Problems that aren't consistent might not have a scalable solution)

12. A new contained problem - qualitative feedback analysis

Through these interviews and our assessment, we discovered the most promising contained problem:


💡 Product teams in fast growing product-led companies do not have an easy way to analyse and find actionable insight in qualitative customer feedback


We learned that Product Ops leaders, as well as Product Managers and Design Researchers, spend days every month manually analysing qualitative feedback.

Particularly challenging are open-text responses received across sources including:

  • NPS & CSAT surveys

  • In-app surveys

A common example is the open-text responses that follow an NPS score question.


The most common analysis approach is called 'Thematic Analysis'. The goal is to surface distinct themes in the data that can be turned into problem statements to help inform product decisions.


This analysis process is manual, time consuming and frustrating but is seen as a necessary step to identify problem statements.


The problem is intensifying in fast growing product-led companies that are adopting new feedback collection methods, increasing the volume of qualitative feedback that Product and Design Research teams need to analyse.


One leading product-led company received upwards of 10,000 qualitative survey responses every month. Responses had to be analysed manually, which required ~10 days of Senior Researcher time every month.


13. Our new solution

Design Sprints with strict constraints

Through an intensive Design Sprint process, we ideated and designed multiple potential solutions. We applied strict constraints to ideation based on previous customer acquisition challenges.

  1. Solution must not use Personally Identifiable Information (PII)

  2. It must be possible for only one individual to discover, test and realise value

  3. Solution must not require any workflow change before it can be tested

  4. Solution must not have to integrate with other tools before value is delivered

  5. Solution must demonstrate value in less than 30 minutes from when the solution is discovered

  6. Solution must provide measurable value within a user's first session

  7. Solution must be compatible with or complement existing tools such as Productboard, Aha and Canny

Potential solutions were tested with 15 target customers from product-led companies.

We identified the solution with the greatest demand. All interviewees asked to test our beta product.


The Intalayer Google Sheets app

Our proposition was focused on ‘time to value’. With a Google Sheets app, Intalayer could analyse any CSV export from well known survey tools.


(Updated Intalayer landing page highlighting our value proposition)

Google Sheets is a readily available tool and also an accessible tool of choice for many when performing Thematic Analysis. By focusing our solution around the Google Sheets platform, we were able to avoid integration and multiple stakeholder buy-in issues encountered previously.


Our novel approach analysed customer feedback without any training of a text classification model, enabling us to automatically identify high level and low level themes in qualitative feedback. This could reduce time to value to under 10 minutes.


(New demo video of the Intalayer Google Sheets app)

We positioned Intalayer as a lightweight, lower cost, faster time to value alternative to Qualtrics and Chattermill. These are enterprise focused products that have a broad feature set and require significant set up before performing accurately.


Beta waitlist building

To build further evidence of desirability and secure first users, we began building a beta waitlist through direct outreach, community engagement and content marketing.

Within one month we built a waitlist of 100 product-led companies that were eager to test our beta product.


14. Technical feasibility challenges in delivering our proposition

Despite the confidence we gained from building the beta waitlist, we began to encounter technical challenges throughout solution development and testing.

These feasibility challenges increased market risk and product risk, eroding our confidence in delivering our proposition.


Required accuracy of thematic analysis

The most significant market risk was that demand for a solution was based on the perceived accuracy of the automated Thematic Analysis.


Our target customers could manually conduct analysis, so they compared the accuracy of automated solutions to the linguistic competence of the human brain.

This meant they had very high expectations for the accuracy an automated solution should achieve, compared with the analysis they could conduct themselves as humans familiar with their product and company. Having used existing automated tools and perceived them to be of low quality and accuracy, target customers were skeptical and had little tolerance for inaccuracy.


The level of perceived pain correlates with feedback complexity

Thematic Analysis is complex when customer feedback is complex. Customer feedback is often more complex in larger companies as they aim to aggregate feedback from multiple sources and analyse it together. Larger companies also have a wider product feature set and range of 'subjects' customers can provide feedback on.


It may have been possible to narrow Intalayer's use case further beyond NPS & CSAT surveys and in-app surveys or to target smaller companies, so the customer feedback received was less complex. The risk there though was that if feedback is less complex, then target customers will perceive less pain in analysing it. This would decrease demand for a solution.

These factors led to the challenge of Intalayer needing to meet high expectations while solving a complex problem.


Technical challenges and solution evolution

Version 1:

Our initial solution was an interactive Word Cloud designed to help customers explore the most common words appearing in feedback at a high level. However, trial customers did not perceive this as valuable, as common words may not be contextually relevant or nuanced to the 'subjects' related to their business. A ‘subject’ in the context of software development and survey feedback might be a specific feature or aspect of the product experience like "documentation". The Word Cloud also didn't meet expectations in providing supplementary information (e.g. sentiment) or quantitative data (e.g. count).
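For reference, a minimal sketch of the word-frequency counting that sits behind a Word Cloud view. It is illustrative only; the stop-word list and tokenisation here are heavily simplified compared to the actual app.

```python
# Illustrative word-frequency counting behind a Word Cloud style view.
import re
from collections import Counter

STOP = {"the", "a", "an", "is", "it", "to", "and", "of", "for", "i", "we", "in", "on"}

feedback = [
    "The documentation is hard to navigate",
    "Great product, but documentation could be better",
    "Search in the documentation never finds what I need",
]

counts = Counter(
    word
    for text in feedback
    for word in re.findall(r"[a-z']+", text.lower())
    if word not in STOP
)

print(counts.most_common(5))  # e.g. [('documentation', 3), ...]
```

As the trial feedback showed, the most frequent words are not necessarily the most meaningful 'subjects'.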

Version 2:

v2 was an interactive table of extracted subjects, showing count and sentiment. Trial customers could "dig deeper" into subjects by clicking on a subject then clicking on associated words, which would automatically filter feedback that contained the subject and associated words. v2 provided supplementary information and allowed the user to manually determine context and nuance by reading filtered feedback associated with subjects. This user experience was still largely manual and ultimately didn't sufficiently ease the burden of reading through every item of feedback to identify high level and low level themes.
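A simplified sketch of the v2 idea follows, assuming subjects have already been extracted and each item already carries a sentiment score. Those inputs are hard-coded stand-ins here, because the extraction and scoring were the genuinely hard part.

```python
# Sketch of v2's subject table: count and average sentiment per subject,
# plus a filter to "dig deeper" into the feedback behind a subject.
# Subjects and sentiment scores are hard-coded stand-ins for NLP output.
from collections import defaultdict

feedback = [
    {"text": "Exporting reports is painfully slow", "subject": "exports", "sentiment": -0.6},
    {"text": "Exports fail for large workspaces", "subject": "exports", "sentiment": -0.8},
    {"text": "Love the new dashboard layout", "subject": "dashboard", "sentiment": 0.7},
]

table = defaultdict(lambda: {"count": 0, "sentiment_total": 0.0})
for item in feedback:
    row = table[item["subject"]]
    row["count"] += 1
    row["sentiment_total"] += item["sentiment"]

for subject, row in table.items():
    avg = row["sentiment_total"] / row["count"]
    print(f"{subject}: count={row['count']}, avg_sentiment={avg:+.2f}")

def dig_deeper(subject):
    """Return the raw feedback behind a subject, as v2's click-to-filter did."""
    return [item["text"] for item in feedback if item["subject"] == subject]

print(dig_deeper("exports"))
```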

Version 3:

Through more user research and previous product iterations, we noticed common patterns in how our target customers' end users were providing feedback in surveys. As a simple example, language patterns like "I wish [subject] could" indicated categories of information like a request around a subject.

As this was heading into the realm of Linguistics, we engaged an external consultant in Natural Language Processing (NLP). v3 used a form of grammar matching and subject extraction and was able to extract low volumes of 'useful' themes. However, as more extraction patterns were added, volume increased but accuracy and quality of themes decreased significantly. We quickly lost confidence in this solution's ability to quickly scale to an accurate and reliable product that could meet high expectations.
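As a rough illustration of the v3 pattern-matching idea, the toy sketch below maps simple phrasing patterns to a category and a 'subject'. The real version used proper grammar matching via NLP tooling; these regexes are stand-ins, and adding more of them is exactly where quality degraded.

```python
# Toy illustration of v3-style pattern matching: phrasing patterns map a sentence
# to a category and a rough "subject". Real grammar matching was far richer.
import re

PATTERNS = [
    (re.compile(r"i wish (?:the )?(?P<subject>[\w\s]+?) (?:could|would)", re.I), "feature request"),
    (re.compile(r"(?:the )?(?P<subject>[\w\s]+?) (?:keeps|is) crashing", re.I), "bug report"),
    (re.compile(r"it'?s hard to (?P<subject>[\w\s]+)", re.I), "usability issue"),
]

responses = [
    "I wish the export tool could handle larger files",
    "The mobile app keeps crashing on login",
    "It's hard to find the billing settings",
]

for text in responses:
    for pattern, category in PATTERNS:
        match = pattern.search(text)
        if match:
            print(f"{category}: {match.group('subject').strip()!r}  <- {text}")
            break
```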

Version 4:

To meet target customer expectations, themes needed to be created by clustering high "importance" snippets of information based on "similarity". Since there were no market-ready, off-the-shelf tools to help us piece together a scrappy solution, we engaged the NLP consultant again.

v4 was particularly challenging due to the subjective and context driven nature of "importance" and "similarity" and we quickly fell into what was described as “the most complex realm of Natural Language Processing”. Ultimately, we were unable to create such a complex solution to a high degree of accuracy.
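For flavour, here is a minimal sketch of clustering feedback snippets by similarity using TF-IDF vectors and k-means with a hand-picked cluster count. It deliberately sidesteps the hard parts of v4 - deciding what counts as 'important' and contextually 'similar' - which is why a sketch like this could never meet the expectations described above.

```python
# Minimal sketch of clustering snippets by similarity: TF-IDF vectors + k-means.
# The cluster count is hand-picked and "importance" is ignored entirely.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

snippets = [
    "csv export times out on large reports",
    "csv export fails for large reports",
    "onboarding emails arrive days late",
    "new users never receive onboarding emails",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for snippet, label in zip(snippets, labels):
        if label == cluster:
            print(f"  - {snippet}")
```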


Key overarching challenges that emerged

  • The MVP to enter the market and meet expectations was still very complex and required NLP experts and R&D effort that extended beyond our runway.

  • There was no ‘scrappy’ way to deliver value against target customer expectations.

  • The complex nature of NLP products, which require R&D effort to make significant improvements, decreased the chance of creating fast feedback loops for quick product iteration.

  • Given the technical complexity, there were likely 'unknown unknowns' and solution roadblocks that we couldn’t imagine.

15. A new go-to-market strategy - Rapid Reports

Given the complexity in producing an MVP that would meet expectations and allow us to enter the market, we decided to implement a 'Service as a Product' strategy in the form of a 'Rapid Report'. Customers would provide us with raw feedback data and within 24 hours we would deliver a complete report, using a combination of our Google Sheets app and manual effort to shortcut the process.


Our goal was to deliver a recurring service where every fortnight or month, target customers would share their feedback data and we'd conduct the analysis.

We hoped the Rapid Report service would allow us to:

  • Prove that customers would pay to solve the pain of manual qualitative feedback analysis.

  • Develop an intimate understanding of the pain by performing analysis ourselves again and again.

  • Continue developing customer relationships.

  • Collect training data to improve model accuracy and incrementally improve the product to self-serve expectations.

  • Reduce product feedback cycles by making us our own customers.

Although we generated revenue by creating reports for three ‘unicorn’ product-led companies, the Rapid Report service quickly ran into challenges.


Key overarching challenges that emerged

  • Customers wanted a self-serve solution and only agreed to Rapid Reports as an interim value add. They weren’t willing to commit to recurring agreements.

  • Where companies had internal resources available (e.g. Product Ops team, Analytics team, Design Research team) they were unable to secure budget for an external party to conduct the analysis, as opposed to a self-serve tool that could augment their capability.

  • Larger companies also wanted to aggregate and analyse feedback from multiple sources e.g. reviews, surveys, call transcripts. There were concerns about sending potentially sensitive internal data to a 3rd party to analyse.

  • Companies that didn't have internal resources lacked the required volume of feedback to perceive a significant pain and justify outsourcing feedback analysis.

16. Co-founder departure

At the end of September 2021, after nearly two years of hard work, the challenges we faced left us with no clear path forward in the company and dwindling capital runway.

Because of this, Dain decided to leave Intalayer.


17. Assessing our position and direction

We totally understood Dain’s reasons and agreed we needed to take another step back to assess our position and direction.

At a high level, we had two options:

  1. Pivot inside our current problem: Explore new approaches to overcome feasibility challenges

  2. Pivot outside our current problem: Turn our attention to the other contained problems we defined or a problem we personally experienced while building Intalayer.

First, we needed to explore opportunities to pivot inside our current problem. We spent a week assessing the challenges and risks we were facing, as well as the alternate approaches we could take.


Conclusions on the qualitative feedback analysis problem space

  • We faced market risk in the form of skepticism, high expectations and low tolerance. We also faced significant product risk as a team without NLP expertise tackling "the most complex realm of Natural Language Processing." As a result, we lost confidence in our ability to deliver a valuable solution that meets target customer expectations.

  • We lost confidence that this could be a significant opportunity for Intalayer. Delivering value to SMEs at a relatively low price point would still require substantial R&D investment. Competitors targeting large companies with the greatest need and willingness to pay have years of R&D effort and market traction behind them and it was unlikely that we could eventually be "10x better".

  • Opportunity cost - there were likely better areas for us to devote effort with a higher chance of success.

Finding opportunity in new problem spaces

We decided to pivot outside the qualitative feedback analysis problem space. That decision gave us freedom to explore the other contained problems we defined and problems we personally experienced while building Intalayer.

Over two weeks we defined, assessed and prioritised over 30 problems. This was an energising process and we felt excited by the range of opportunities we identified. To learn from our most recent mistakes, we revised our assessment criteria and applied stricter technical feasibility constraints.

  • Can we clearly describe the problem, the expectation for a solution and level of tolerance for how 'complete' or accurate a solution needs to be?

  • Is it possible to go to market with a manual or low tech/no code service? This will reduce risk by deepening our understanding of the problem, minimising resource investment and providing immediate validation.

  • Can we create very fast feedback loops to accelerate our understanding of the problem?

Our assessment and prioritisation led to a shortlist of problems that energised us and represented significant market opportunities.

18. Deciding to close down

Despite our excitement to explore new problems together, we had to determine if the existing Intalayer entity would be the best vehicle for this exploration.


Investor relationships

Our biggest consideration was respecting our investor relationships. Our investor and advisor network is invaluable, so we wanted more than anything to protect our professional and personal relationships with them.

Some of our investors backed us, the team. Others backed the problem space. We knew that, from their perspective, we were now a different shape of team (with only two founders) working on new problems. Because of this, we believed that it wouldn’t be fair to use investor cash until we could validate a path forward.

So, we decided to stop taking a salary from Intalayer. Additionally, we believed it may be best to return remaining cash to investors and reconnect with them in the future, when we have an investible opportunity.


Founder equity

Equity is a fundamental motivator for any founder and we were concerned that we’d be essentially starting again as a two founder team with heavy dilution.

We also feared that continuing with the Intalayer entity might make Seed and Series A fundraising more challenging, given the dilution we’d already incurred. Potential investors might believe further founder dilution could impact our level of motivation.


Personal circumstances

After nearly two years of long hours, limited rest and no or low income, our batteries were not what they used to be... and neither were our bank account balances!

We realised:

  • It takes founder energy to 'get a rocket into orbit'

  • We were worried we wouldn’t be able to give opportunities the energy they need without recharging first

  • Recharging is difficult without financial security

  • We need to take time to recharge and get some security

  • We’re in no rush. When we’re ready, we want to thoughtfully and diligently apply our learnings with clear minds and full energy

Based on these considerations, we decided that the best thing for us and our investors would be to close down the Intalayer entity and return remaining cash to our investors.


19. Did we fail?

This is a difficult and painful question, likely with no right or wrong answer.

But, here’s our answer:

💡 Intalayer might have failed, but we haven’t


We believe that failure is never pursuing a dream, or giving up on that dream.

We are proud of ourselves for taking a risk, trying, facing repeat challenges but always trying again and again.


Over two years we had long, painful periods with no wins that felt like a grind. To get through them, our mindset became:

💡 If we keep trying every day, we win every day


Now, despite Intalayer ‘failing’, we have a lot of exciting opportunities and we still have the same ambition as when we started the Antler program.


So, we will recharge and try again. And that, in itself, is succeeding.




Article originally published here
