Northrop Grumman hit a new milestone in extending the life of active spacecraft when its purpose-built MEV-2 spacecraft docked with Intelsat’s IS-1002 satellite to give it another five years of life. It’s a strong demonstration of the possibilities in the growing field of orbital servicing operations.
MEV-2 launched in August and matched the orbit of Intelsat’s 18-year-old satellite, which would soon have been due for decommissioning, having exceeded its original mission by some five years. But it’s precisely this type of situation that the new “on-orbit servicing, assembly and manufacturing,” or OSAM, industry intends to target, allowing such satellites to live longer and likely saving their operators millions.
In last night’s operation, the MEV-2 spacecraft slowly approached IS-1002 and docked with it, essentially adding itself as a spare engine with a full tank. It will stay attached this way for five years, after which it will move on to its next mission — another end-of-life satellite, probably. (I’ve asked for a few more details along these lines.)
Last year the MEV-1 mission performed a similar operation, docking with Intelsat’s IS-901 and changing its orbit.
But in that case, the satellite was inactive and not in the correct orbit to return to service. MEV-1 therefore had a bit more latitude in how it approached the first part of the mission.
In the case of MEV-2, the IS-1002 satellite was in active use in its accustomed orbit, meaning the servicing spacecraft had to coordinate an approach that ran no risk of disrupting the target craft’s operations. Being able to service working satellites, of course, is a major step up from only working with dead ones.
And naturally the goal is to have spacecraft that could dock and refuel another satellite without hanging onto it for a few years, or service a malfunctioning part so that a craft that’s 99% functional can stay in orbit rather than be allowed to burn up. Startups like Orbit Fab aim to build and standardize the parts and ports needed to make this a reality, and Northrop Grumman is planning a robotic servicing mission for its next trick, expected to launch in 2024.
WeRide, the Chinese autonomous vehicle startup that recently raised $310 million, has received a permit to test driverless vehicles on public roads in San Jose, California. WeRide is the seventh company, following AutoX, Baidu, Cruise, Nuro, Waymo and Zoox, to receive a driverless testing permit.
In the early days of autonomous vehicle development, testing permits required human safety drivers behind the wheel. Some 56 companies have an active permit to test autonomous vehicles with a safety driver. Driverless testing permits, in which a human operator is not behind the wheel, have become the new milestone and a required step for companies that want to launch a commercial robotaxi or delivery service in the state.
The California DMV, the agency that regulates autonomous vehicle testing in the state, said the permit allows WeRide to test two autonomous vehicles without a driver behind the wheel on specified streets within San Jose. WeRide has had a permit to test autonomous vehicles with safety drivers behind the wheel since 2017. WeRide is also restricted in how and when it tests these vehicles. The driverless vehicles are designed to operate on roads with posted speed limits not exceeding 45 miles per hour. Testing will be conducted during the day Monday through Friday, but will not occur in heavy fog or rain, according to the DMV.
To reach driverless testing status in California, companies have to meet a number of safety, registration and insurance requirements. Any company applying for a driverless permit must provide evidence of insurance or a bond equal to $5 million, verify that its vehicles are capable of operating without a driver, meet federal Motor Vehicle Safety Standards or have an exemption from the National Highway Traffic Safety Administration, and ensure the vehicles are SAE Level 4 or 5. Companies must also continuously monitor the test vehicles and train remote operators on the technology.
Driverless testing permit holders must also report to the DMV any collisions involving a driverless test vehicle within 10 days and submit an annual report of disengagements.
While the vast majority of WeRide’s operations are in China, the permit does signal its continued interest in the United States. WeRide, which is headquartered in Guangzhou, China, maintains R&D and operation centers in Beijing, Shanghai, Nanjing, Wuhan, Zhengzhou and Anqing, as well as in Silicon Valley. The startup, which was founded in 2017, received a permit in February to operate a ride-hailing operation in Guangzhou.
The company is one of China’s most-funded autonomous vehicle technology startups with backers that include bus maker Yutong, Chinese facial recognition company SenseTime and Alliance Ventures, the strategic venture capital arm of Renault-Nissan-Mitsubishi. Other WeRide investors include CMC Capital Partners, CDB Equipment Manufacturing Fund, Hengjian Emerging Industries Fund, Zhuhai Huajin Capital, Flower City Ventures, Tryin Capital, Qiming Venture Partners, Sinovation Ventures and Kinzon Capital.
After it looked like Apple might be a no-show, the company has committed to sending a representative to a Senate antitrust hearing on app store competition later this month.
Last week, Senators Amy Klobuchar (D-MN) and Mike Lee (R-UT) put public pressure on the company to attend the hearing, which will be held by the Senate Judiciary Subcommittee on Competition Policy, Antitrust, and Consumer Rights. Klobuchar chairs that subcommittee, and has turned her focus toward antitrust worries about the tech industry’s most dominant players.
The hearing, which Google will also attend, will delve into Apple and Google’s control over “the cost, distribution, and availability of mobile applications on consumers, app developers, and competition.”
App stores are one corner of tech that looks the most like a duopoly, a perception that Apple’s high-profile battle with Fortnite maker Epic is only elevating. Meanwhile, with a number of state-level tech regulation efforts brewing, Arizona is looking to relieve developers of Apple and Google’s hefty cut of app store profits.
In a letter last week, Klobuchar and Lee, the subcommittee’s ranking member, accused Apple of “abruptly” deciding that it wouldn’t send a witness to the hearing, which is set for April 21.
“Apple’s sudden change in course to refuse to provide a witness to testify before the Subcommittee on app store competition issues in April, when the company is clearly willing to discuss them in other public forums, is unacceptable,” the lawmakers wrote.
By Monday, that pressure had apparently done its work, with Apple agreeing to attend the hearing. Apple didn’t respond to a request for comment.
While the lawmakers are counting Apple’s acquiescence as a win, that doesn’t mean the company will be sending its chief executive. Major tech CEOs have been called before Congress more often over the last few years, but those appearances might have diminishing returns.
Tech CEOs, Apple’s Tim Cook included, are thoroughly trained in the art of saying little when pressed by lawmakers. Dragging in a CEO might work as a show of force, but tech execs generally reveal little over the course of their lengthy testimonies, particularly when a hearing isn’t accompanied by a deeper investigation.
As artificial intelligence becomes more advanced, previously cutting-edge — but generic — AI models are becoming commonplace, such as Google Cloud’s Vision AI or Amazon Rekognition.
While effective in some use cases, these solutions do not suit industry-specific needs right out of the box. Organizations that seek the most accurate results from their AI projects will simply have to turn to industry-specific models.
There are a few ways that companies can generate industry-specific results. One would be to adopt a hybrid approach — taking an open-source generic AI model and training it further to align with the business’ specific needs. Companies could also look to third-party vendors, such as IBM or C3, and access a complete solution right off the shelf. Or — if they really needed to — data science teams could build their own models in-house, from scratch.
Let’s dive into each of these approaches and how businesses can decide which one works for their distinct circumstances.
Generic AI models like Vision AI or Rekognition and open-source ones from TensorFlow or Scikit-learn often fail to produce sufficient results when it comes to niche use cases in industries like finance or the energy sector. Many businesses have unique needs, and models that don’t have the contextual data of a certain industry will not be able to provide relevant results.
At ThirdEye Data, we recently worked with a utility company to tag and detect defects in electric poles by using AI to analyze thousands of images. We started off using the Google Vision API and found that it was unable to produce the desired results; the precision and recall values of the AI models were completely unusable. The models were unable to read the characters within the tags on the electric poles 90% of the time because they could not identify the nonstandard fonts and varying background colors used on the tags.
So, we took base computer vision models from TensorFlow and optimized them to the utility company’s precise needs. After two months of developing AI models to detect and decipher the tags on the electric poles, and another two months of training those models, they are achieving accuracy levels of over 90%, and they will continue to improve over time with retraining iterations.
Any team looking to expand its AI capabilities should first apply its data and use cases to a generic model and assess the results. Open-source algorithms that companies can start off with can be found on AI and ML frameworks like TensorFlow, Scikit-learn or Microsoft Cognitive Toolkit. At ThirdEye Data, we used convolutional neural network (CNN) algorithms on TensorFlow.
Then, if the results are insufficient, the team can extend the algorithm by training it further on their own industry-specific data.
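For teams wondering what that extension step looks like in practice, here is a minimal sketch of the hybrid approach using TensorFlow’s Keras API: start from a generic ImageNet-trained backbone, add a small task-specific head and fine-tune it on the company’s own images. The directory names, class count and hyperparameters below are hypothetical placeholders, not details from the ThirdEye Data project.

```python
# Minimal sketch: fine-tuning a generic pretrained CNN on industry-specific images.
# Dataset paths and the number of classes are hypothetical placeholders.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 3  # e.g., "readable tag", "damaged tag", "no tag" -- illustrative only

# Load domain-specific images from a directory (one subfolder per class).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pole_tag_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "pole_tag_images/val", image_size=IMG_SIZE, batch_size=32)

# Start from a generic ImageNet-trained backbone and freeze its weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

# Add a small task-specific classification head on top of the generic features.
inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train the new head first; the backbone can later be unfrozen and fine-tuned
# at a low learning rate if the results still fall short.
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The point of the sketch is the workflow, not the specific architecture: assess the generic model first, and only invest in further training, or a custom build, when the off-the-shelf results prove insufficient.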
A few months back, robotic process automation (RPA) unicorn UiPath raised a huge $750 million round at a valuation of around $35 billion. The capital came ahead of the company’s expected IPO, so its then-new valuation helped provide a measuring stick for where its eventual flotation could price.
UiPath then filed to go public. But the company’s first IPO price range, released today, values it below where its final private backers did.
In an S-1/A filing, UiPath disclosed that it expects its IPO to price between $43 and $50 per share. Using a simple share count of 516,545,035, the company would be worth $22.2 billion to $25.8 billion at the lower and upper extremes of its expected price interval. Neither of those numbers is close to what it was worth, in theory, just a few months ago.
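For anyone who wants to check the math, the range follows directly from multiplying the share count by the two ends of the price interval; here is a quick back-of-the-envelope sketch in Python using the figures reported above:

```python
# Back-of-the-envelope check of UiPath's simple valuation range.
shares = 516_545_035          # simple share count from the S-1/A
low, high = 43, 50            # expected IPO price range per share, in dollars

print(f"Low end:  ${shares * low / 1e9:.1f}B")   # ~ $22.2B
print(f"High end: ${shares * high / 1e9:.1f}B")  # ~ $25.8B
```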
According to IPO watching group Renaissance Capital, UiPath is worth up to $26.0 billion on a fully diluted basis. That’s not much more than its simple valuation.
For UiPath, its initial IPO price interval is a disappointment, though the company could see an upward revision in its valuation before it does sell shares and begin to trade. But more to the point, the company’s private-market valuation bump followed by a quick public-market correction stands out as a counter-example to something that we’ve seen so frequently in recent months.
Is UiPath’s first IPO price interval another indicator that the IPO market is cooling?
If you think back to the end of 2020, Roblox decided to cancel its IPO and pursue a direct listing instead. Why? Because a few companies like Airbnb had gone public at what appeared to be strong valuation marks only to see their values rocket once they began to trade. So, Roblox decided to raise a huge amount of private capital, and then direct list.