This morning Arist, a startup that sells software allowing other organizations to offer SMS-based training to staff, announced that it has extended its seed round to $3.9 million after adding $2 million to its prior raise.
TechCrunch has covered the company modestly before this seed extension, noting that it was part of the CRV-backed Liftoff List and reporting on some of its business details when it took part in a recent Y Combinator demo day.
Something that stood out in our notes on the company when it presented at the accelerator’s graduation event was its economics, with our piece noting that the startup “already [has] several big ticket clients and [says it] will soon be profitable.” Profitable is just not a word TechCrunch hears often when it comes to early-stage, high-growth companies.
So, when the company picked up more capital, we picked up the phone. TechCrunch spoke with the company’s founding team, including Maxine Anderson, the company’s current COO; Ryan Laverty, its president; and Michael Ioffe, its CEO, about its latest round.
According to the trio, Arist raised its initial $1.9 million around the time it left Y Combinator, a round led by Craft Ventures at a $15 million valuation. Following that early investment, the company’s business with large clients performed well, leading it to close $2 million more last December. The founders said that the new funds were raised at a higher valuation than the previous seed tranche.
The second deal was led by Global Founders Capital.
The company’s enterprise adoption makes sense, as all large companies have regular training requirements for their workers; and as anyone who has worked for a megacorp knows, current training, while improved in recent years, is far from perfect. Arist is a bet that lots of corporate training — and the training that emanates from governments, nonprofits and the like — can be sliced into small pieces and ingested via text-message.
For that the company charges around $1,000 per month, minimum.
Arist did catch something of a COVID wave, with its founding team telling TechCrunch that pitching its service to large companies got easier after the pandemic hit. Many companies came to better appreciate how busy their staff were once they moved to working from home, the trio explained, and with some folks suffering from limited internet connectivity, text-based training helped pick up the slack.
We were also curious about how the startup onboards customers to the somewhat new text-based learning world; is there a steep learning curve to be managed? As it turns out, the startup helps new customers build their first course. And, in response to our question about the expense of that effort, the Arist crew said that they use freelancers for the task, keeping costs low.
Recently Arist has expanded its engineering staff, and plans to scale from around 11 people today to around 30 by the end of the year. And while Anderson, Laverty and Ioffe are based in Boston, they are hiring remotely. The startup serves global customers via a WhatsApp integration. So Arist should be able to scale its staff and customer base around the world effectively from birth. (This is the new normal, we reckon.)
What’s ahead? Arist wants to grow its revenues 5x to 10x by the end of the year, keep hiring, and may decide around the end of the year whether it wants to raise more capital.
Oh, and it partners with Twilio to some degree, though the group was coy on just what sort of discounts it may receive; the founding team merely noted that they liked the SMS giant and deferred further commentary.
All told, Arist is what we look for in an early-stage startup in terms of growth, vision and potential market scale — the startup thinks that 80% of training should be via SMS or Slack and Teams, the latter two of which are a hint about its product direction. But Arist feels a bit more mature financially than some of its peers, perhaps due to its price point. Regardless, we’ll check back in at the mid-point of the year and see how growth is ticking along at the company.
As the U.S. reportedly readies for retaliation against Russia for hacking into some of the government’s most sensitive federal networks, the U.S. is facing another old adversary in cyberspace: China.
Microsoft last week revealed a new hacking group it calls Hafnium, which operates in, and is backed by, China. Hafnium used four previously unreported vulnerabilities — or zero-days — to break into at least tens of thousands of organizations running vulnerable Microsoft Exchange email servers and steal email mailboxes and address books.
It’s not clear what Hafnium’s motives are. Some liken the activity to espionage — a nation-state gathering intelligence or industrial secrets from larger corporations and governments.
But what makes this particular hacking campaign so damaging is not only the ease with which the flaws can be exploited, but also how many — and how widespread — the victims are.
Security experts say the hackers automated their attacks by scanning the internet for vulnerable servers, hitting a broad range of targets and industries — law firms and policy think tanks, but also defense contractors and infectious disease researchers. Schools, religious institutions, and local governments are among the victims running vulnerable Exchange email servers and caught up by the Hafnium attacks.
While Microsoft has published patches, the U.S. federal cybersecurity advisory agency CISA said the patches only fix the vulnerabilities — and won’t close any backdoors left behind by the hackers.
CISA is aware of widespread domestic and international exploitation of Microsoft Exchange Server vulnerabilities and urges scanning Exchange Server logs with Microsoft's IOC detection tool to help determine compromise. https://t.co/khgCR2LAs0. #Cyber #Cybersecurity #InfoSec
— US-CERT (@USCERT_gov) March 6, 2021
There is little doubt that larger, well-resourced organizations have a better shot at investigating if their systems were compromised, allowing those victims to prevent further infections, like destructive malware or ransomware.
But that leaves the smaller, rural victims largely on their own to investigate if their networks were breached.
“The types of victims we have seen are quite diverse, many of whom outsource technical support to local IT providers whose expertise is in deploying and managing IT systems, not responding to cyber threats,” said Matthew Meltzer, a security analyst at Volexity, a cybersecurity firm that helped to identify Hafnium.
Without the budget for cybersecurity, victims can always assume they are compromised – but that doesn’t equate to knowing what to do next. Patching the flaws is just one part of the recovery effort. Cleaning up after the hackers will be the most challenging part for smaller businesses that may lack the cybersecurity expertise.
It’s also a race against the clock to prevent other malicious hackers from discovering or using the same vulnerabilities to spread ransomware or launch destructive attacks. Both Red Canary and Huntress said they believe hacking groups beyond Hafnium are exploiting the same vulnerabilities. ESET said at least ten groups were also exploiting the same server flaws.
Katie Nickels, director of intelligence at threat detection firm Red Canary, said there is “clearly widespread activity” exploiting these Exchange server vulnerabilities, but that follow-on exploitation of compromised servers has been less common.
“Cleaning up the initial web shells will be much easier for the average IT administrator than it would be to investigate follow-on activity,” said Nickels.
Microsoft has published guidance on what administrators can do, and CISA has both advice and a tool that helps to search server logs for evidence of a compromise. And in a rare statement, the White House’s National Security Council warned that patching alone “is not remediation,” and urged businesses to “take immediate measures.”
Patching and mitigation is not remediation if the servers have already been compromised. It is essential that any organization with a vulnerable server take immediate measures to determine if they were already targeted. https://t.co/HYKF2lA7sn
— National Security Council (@WHNSC) March 6, 2021
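To make the remediation advice above concrete: because the Hafnium intrusions typically left web shells behind as .aspx files on the Exchange server’s disk, one basic check an administrator can run is to look for recently written .aspx files in web-facing directories. The sketch below is a minimal illustration of that idea only — the directory paths are examples, not an authoritative indicator list, and real investigations should rely on Microsoft’s and CISA’s published detection tooling rather than a script like this.

```python
import os
import time

# Example directories of the kind attackers have dropped web shells into.
# These paths are illustrative assumptions, NOT an official IOC list.
SUSPECT_DIRS = [
    r"C:\inetpub\wwwroot\aspnet_client",
    r"C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy\owa\auth",
]


def find_recent_aspx(root, since_epoch):
    """Return paths of .aspx files under `root` modified at or after `since_epoch`.

    A web shell is just an .aspx page the attacker wrote to disk, so any
    recently created .aspx file in a directory that should be static is
    worth a manual look.
    """
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".aspx"):
                continue
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) >= since_epoch:
                    hits.append(path)
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return hits


if __name__ == "__main__":
    cutoff = time.time() - 30 * 86400  # look back 30 days
    for d in SUSPECT_DIRS:
        if os.path.isdir(d):
            for hit in find_recent_aspx(d, cutoff):
                print("suspicious:", hit)
```

A hit here is only a starting point for investigation, not proof of compromise — which is exactly the gap the experts quoted above describe: finding the file is easy; working out what the attacker did afterwards is the hard part.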
How that advice trickles down to smaller businesses will be watched carefully.
Cybersecurity expert Runa Sandvik said many victims, including the mom-and-pop shops, may not even know they are affected, and even if they realize they are, they’ll need step-by-step guidance on what to do next.
“Defending against a threat like this is one thing, but investigating a potential breach and evicting the actor is a larger challenge,” said Sandvik. “Companies have people who can install patches — that’s the first step — but figuring out if you’ve been breached requires time, tools, and logs.”
Security experts say Hafnium primarily targets U.S. businesses, but that the attacks are global. Europe’s banking authority is one of the largest organizations to confirm its Exchange email servers were compromised by the attack.
Norway’s national security authority said that it has “already seen exploitation of these vulnerabilities” in the country and that it would scan for vulnerable servers across Norway’s internet space to notify their owners. Slovenia’s cybersecurity response unit, known as SI-CERT, said in a tweet that it too had notified potential victims in its internet space.
Sandvik said the U.S. government and private sector could do more to better coordinate the response, given the broad reach into U.S. businesses. CISA proposed new powers in 2019 to allow the agency to subpoena internet providers to identify the owners of vulnerable and unpatched systems. The agency just received those new powers in the government’s annual defense bill in December.
“Someone needs to own it,” said Sandvik.
Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using SecureDrop.
In recent years, the tech and venture scene in Latin America has been growing at an accelerated pace. More global investors are backing startups in the region and certain sectors in particular, such as fintech, are exploding.
Global investors are not only pouring money into companies. They’re also investing in funds.
Today, Volpe Capital announced the $80 million first close of its fund targeting high growth technology investments in Latin America. Notably, Japanese investment conglomerate SoftBank, BTG and Banco Inter affiliates are anchor investors in the new fund, which is targeting aggregate commitments of $100 million with a hard cap of $150 million. Volpe also received a “large anchor investment” from its management team.
Andre Maciel, Gregory Reider and Milena Oliveira are the fund’s founding partners, and are based in Sao Paulo, Brazil. Notably, Maciel is the former managing partner at SoftBank’s $5 billion Latin America-focused innovation fund. He launched Volpe in 2019 primarily with SoftBank’s backing. Reider formerly invested at Warburg Pincus.
Maciel said the fund’s raise was “significantly oversubscribed with firm commitments” and believed to be “among the best capital raises for a first-time fund in its asset class in Latin America.”
Volpe Capital plans to invest in about 15 companies over a two-and-a-half-year span, according to Maciel, who expects its average check size to be around $5 million.
So far, it’s backed Uol Edtech, a subsidiary of Grupo Uol that aims to redefine the digital learning experience in Brazil.
“We are in no rush,” Maciel told TechCrunch. “We are happy with our first deal and will take capital preservation in consideration. We believe markets are hot now and plan on taking advantage of the cycle by being patient.”
The fund’s strategy is to go after the companies that are not actively raising capital.
“We want to invest in companies that are not necessarily raising capital when we approach them,” Maciel said.
The fund views itself as agnostic regarding stage and primary versus secondary.
It is seeking to back early-stage companies with less than $50 million in valuation as well as some later stage, high growth companies. The fund’s first investment — Uol Edtech — falls in the latter category with EBITDA margins above 30%, according to Maciel.
Volpe plans to avoid capital intensive industries, even if related to tech.
“Those are more suitable to investors with deeper pockets than Volpe,” Maciel said.
Instead it’s eyeing edtech, healthtech, software and fintech investments (that are not credit-related).
“We like sectors that are prone to disruption in Latin America and that require local customization,” Maciel said. “Given the stage of the VC/growth industry in Latin America, we believe it is better to be a generalist.”
SoftBank International CEO Marcelo Claure describes Maciel as one of his “amazing founding partners for SoftBank in Latin America.”
“We are very happy to be one of Volpe’s anchor investors and look forward to continuing our relationship with them,” he added in a written statement.
Another anchor investor has a SoftBank tie. João Vitor Menin, CEO of Inter, a publicly traded fintech platform in Brazil with a market cap of over $7 billion, points out that Maciel led an investment in Inter’s platform through SoftBank. He also “made valuable contributions” as a board member, according to Menin.
Advances in technology provide all kinds of benefits, but also introduce risks — especially to already marginalized populations. AI for the People’s Mutale Nkonde, disability rights lawyer Haben Girma, and author of Algorithms of Oppression Safiya Umoja Noble have studied and documented these risks for years in their work. They joined us at TC Sessions: Justice 2021 to talk about the deep origins and repercussions of bias in tech, and where to start when it comes to fixing them.
When it comes to identifying bias in tech, there are two ways of coming at it: the tech itself and the people putting it to work. A facial recognition system may itself be racist (such as working poorly with dark skin) or be used in furtherance of racist policies (like stop-and-frisk).
Nkonde: There is the problem of technologies which are inherently racist, or sexist, or ableist, as Haben so beautifully pointed out. But there is another part… an imagination for technologies that could actually serve all people. And if the scientists who are creating those technologies don’t have experience outside of their own experiences, and we’re sitting in a moment where Google AI has got rid of [Margaret] Mitchell and Timnit Gebru, both of whom were technologists from, researchers from, minoritized communities who are thinking about new and different ways that tools could be designed… then you may not see them coming to products. I’d say that the two are definitely married. (Timestamp: 3:00)
Bias does not only exist in controversial tech like facial recognition. Search engines, algorithmic news feeds, and other things we tend to take for granted can also contain harmful biases or contribute to them.
Noble: My concerns were with what we might think of as just banal technologies, things that we really don’t give a second thought to, and that also present themselves as widely neutral, and valuable. Of course this is where I became interested in looking at Google search, because Google’s own kind of declaration that they were interested in organizing all the world’s knowledge, I think was a pretty big claim. I’m coming out of the field of Library and Information Science and thinking about, I don’t know, thousands of years of librarians, for example, around the world, who have been indeed organizing the world’s knowledge, and what it means to have an advertising company, quite frankly, data mine our knowledge, but also commingle it with things like disinformation, propaganda, patently false information and ideas, and really flatten our ability to understand knowledge and good information. (Timestamp: 5:13)
Haben Girma, who is deaf and blind, has advocated for accessibility with the skills she learned at Harvard Law. But the lack of accessibility goes deeper than simply not captioning images properly and other small tasks.
Girma: So most of the technology that’s built was not imagined for disabled people, which is frustrating… and also absolutely ridiculous. Tech has so much potential to exist in visual forms, in auditory forms, in tactile forms, and even smell and taste. It’s up to the designers to create tools that everyone can use. (Timestamp: 0:56)
A disturbing viral trend on TikTok recently questioned the story of deafblind icon Helen Keller. Doubt that she existed as described or did the things she did was widespread on the platform — and because TikTok is not designed for accessibility, others like Keller are excluded from the conversation and effectively erased from consideration in addition to being the subject of false claims.
Girma: Deafblind people have used technology for quite a while, and were early users of technology, including being designers and engineers. We are on many of the social media platforms, there are blind and deaf blind people on Twitter. TikTok was not designed with accessibility in mind.
When you have a space where there are few disabled people, ableism grows. People on TikTok have questioned the existence of Helen Keller, because the people on the platform can’t imagine how a deafblind person would write a book, or travel around the world. Things that are well documented that Helen Keller did. And there’s also lots of information on how blind and deaf blind people are doing these things today, writing books today, using technology today. So when you have these spaces where there are no disabled people, or very few disabled people, ableism and negative biases grow more rapidly. And that’s incredibly harmful, because the people there are missing out on talented, diverse voices. (Timestamp: 12:16)
The flip side of racism within tech is ordinary tech being used by racist institutions. When law enforcement employs “objective” technology like license plate readers or biometric checks, they bring their own systematic biases and troubling objectives.
Nkonde: One of the things that really brought me to was this whole host of technologies that when used by security forces, or police, reinforce these discriminatory impacts on black communities. So that could be the way license plate readers were used by ICE to identify cars, and when they pulled people over, they would do these additional biometric checks, whether it was fingerprinting or iris readers, and then use that to criminalize these people onto the road to deportation. (Timestamp: 17:16)
And when the two forms of bias are combined, certain groups are put at serious disadvantage:
Nkonde: We’re seeing how all of these technologies on their own, are impacting black lives, but imagine when all of those technologies are together, imagine when, here in New York, I walked to the subway to take a train because I have to go to work. And my face is captured by a CCTV camera that could wrongly put me at the scene of a crime because it does not recognize my humanity, because black faces are not recognized by those systems. That’s a very old idea that really takes us back to this idea that black people aren’t human, they’re in fact three fifths of a human, which was at the founding of this country, right? But we’re reproducing that idea through technology. (Timestamp: 19:00)
While companies should be trying to do the right thing, it may help speed things up if there’s a financial incentive as well. And increasingly there is real liability resulting from failing to consider these problems. For instance, if your company produces an AI solution that’s found to be seriously biased, you not only lose business but may find yourself the subject of civil and government lawsuits.
Noble: I think that first of all, there’s a tremendous amount of risk by not taking up these issues. I’ve heard that the risk management profile, for example for a company like Facebook, in terms of harm, what they can’t solve with software and AI, that they use human beings, quite frankly to sort through, for example, the risk that they face is probably estimated around $2 billion, right?
If you’re talking about a $2 billion risk, I think then this is a decision that exceeds the design desires and software engineers. (Timestamp 24:25)
Not just bias but unintended consequences need to be considered, such as how an app or service may be abused in ways the creators might not have thought of.
Noble: I think you have to think far beyond, you know, like, what you can do versus what you should do, or what’s ethical and responsible to do and I think these conversations now can no longer be avoided. This is a place where founders, venture capitalists, everything, every VC in the Valley on Sandhill road should have a person who is responsible for thinking about the adverse effects of the products that they might invest in. (Timestamp: 25:43)
The tendency to “ship it and fix it” rather than include accessibility from the ground up is increasingly being questioned by both advocates and developers. Turns out it’s better for everyone, and cheaper in the long run, to do it right the first time.
Girma: The answer to most of these questions is have the people involved. ‘Nothing about us without us’ is the saying in the Disability Justice Movement, so if these VCs and companies are thinking about investing in a solution that they think will be good for the world? Ask disability justice advocates, get us involved. (Timestamp: 29:25)
We need the VCs to also connect with Disability Justice advocates, and really find someone who has knowledge and background in accessibility and tech. Same thing for any company. All companies, whether their technology already exists or is in the process of being built, should be consulting on accessibility. It’s easier to make something accessible if you design for accessibility, rather than trying to make it accessible afterwards. It’s like having an elevator in a physical building. You don’t build the structure, and then think about adding an elevator. You think about adding an elevator before you design it. (Timestamp: 30:55)
Read the full transcript here.
Early Stage is the premier ‘how-to’ event for startup entrepreneurs and investors. You’ll hear first-hand how some of the most successful founders and VCs build their businesses, raise money and manage their portfolios. We’ll cover every aspect of company-building: fundraising, recruiting, sales, product-market fit, PR, marketing and brand building. Each session also has audience participation built in – there’s ample time included for audience questions and discussion. Use code “TCARTICLE” at checkout to get <a href="http://techcrunch.com/events/tc-early-stage-2021-part-1?promo=tcarticle&display=true">20 percent off tickets</a> right here.
In its latest strike against online content it doesn’t control, Russia is throttling Twitter. State agency Roskomnadzor said today it was taking the action in response to the social media company not removing banned content, claiming it had identified more than 3,000 unlawful posts that have not been taken down — and warning it could implement a total block on the service.
However, the comms regulator’s action, which slows Twitter for all mobile users and 50% of desktop users in Russia, appeared to have briefly taken down Roskomnadzor’s own website earlier today.
Reports also circulated on social media that Russian government websites, including kremlin.ru, had been affected. At the time of writing these sites were accessible but earlier we were unable to access Roskomnadzor’s site.
Goes without saying that of course Roskomnadzor blocked itself too instead of the one website that it planned to (but Twitter is of course online in Russia as evident from this very tweet). pic.twitter.com/DoxTLJmIxY
— Alexey Kovalyov (@Alexey__Kovalev) March 10, 2021
Because of the Twitter block using Deep Packet Inspection technology, the quality of internet service in Russia as a whole may deteriorate: https://t.co/DscmzobLUI pic.twitter.com/OMw9wf2qVq
— Рустем Адагамов (@adagamov) March 10, 2021
The stand-off between the state agency and Twitter comes at a time when Russia is trying to clamp down on anti-corruption protestors who are supporters of the jailed opposition leader, Alexei Navalny — who has, in recent weeks, called for demonstrators to take to the streets to ramp up pressure on the regime.
Roskomnadzor’s statement makes no mention of the state’s push to censor political opposition — claiming only that the content it’s throttling Twitter for failing to delete is material relating to minors committing suicide; child pornography; and drug use. Hence it also claims to be taking the action to “protect Russian citizens”. However, a draconian application of speech-chilling laws to try to silence political opposition is nothing new in Putin’s Russia.
The Russian regime has sought to get content it doesn’t like removed from foreign-based social media services a number of times in recent years, including — as now — resorting to technical means to limit access.
Most notoriously, back in 2018, an attempt by Russia to block access to the messaging service Telegram resulted in massive collateral damage to the local Internet as the block took down millions of (non-Telegram-related) IP addresses — disrupting those other services.
Also in 2018 Facebook-owned Instagram complied with a Russian request to remove content posted by Navalny — which earned it a shaming tweet from the now jailed politician.
.@instagram decided to comply with Russian illegal censorship requests and deleted some content about oligarch Deripaska. Shame on you, @instagram! This content was spotlighted by our corruption investigation https://t.co/Pa4xVQE8MQ
— Alexey Navalny (@navalny) February 15, 2018
Although now behind bars in Russia — Navalny was jailed in February, after Russia claimed he had violated the conditions of a suspended sentence — the prominent Putin critic has continued to use his official Twitter account as a megaphone to denounce corruption and draw attention to the injustice of his detention, following his attempted poisoning last year (which has been linked to Russia’s FSB).
Recent tweets from Navalny’s account include amplification of an investigation by the German newspaper Bild into RT DE, the German channel of the Russian state-controlled media outlet Russia Today — which the newspaper accuses of conducting espionage in Germany targeting Navalny and his associates (he was staying in a German hospital in Berlin at the time, recovering from the attempted poisoning).
Also, BILD here tells a story about how RT DE was used for espionage, namely to follow me and @pevchikh around Berlin.
Will this story have a sequel? How long will RT remain licensed as a broadcaster in Germany? https://t.co/rFXESQ8aeW
— Leonid Volkov (@leonidvolkov) March 9, 2021
Slowing down access to Twitter is one way for Russia to try to put a lid on Navalny’s critical output on the platform — which also includes a recent retweet of a video claiming that Russian citizens’ taxes were used this winter by Putin and his cronies to fund yachts, whiskey and a Maldivian vacation.
Navalny’s account has also tweeted in recent hours to denounce his jailing by the Russian state following its attempt to poison him — saying: “This situation is called attempted murder”.
This situation is called attempted murder https://t.co/ytUl3Zgb1a
— Alexey Navalny (@navalny) March 10, 2021
At the time of writing Twitter had not responded to requests for comment on Roskomnadzor’s action.
However last month, in a worrying development in India that’s also related to anti-government protests (in that case by farmers who are seeking to reverse moves to deregulate the market), Twitter caved in to pressure from the government — shuttering 500 accounts including some linked to the anti-government protests.
It also agreed to reduce the visibility of certain protest hashtags.