Technology
Lofty Dreams: How The Flying Taxi May Finally Realize Our Desire for the Flying Car
Published 6 years ago
If you thought that the future of transportation was just electric cars and autonomous vehicles, well, there’s a push to take things a little higher.
Certainly, gasoline-free, self-driving cars are all the rage right now, and rightfully so. Cars capable of Level 4 automation (Level 5 being full autonomy) are already deep into testing.
However, other transportation technologies aim to leave the road behind and lift occupants above the fray of cars and traffic, delivering them to their destinations through the air rather than across the ground.
While the promise of the flying car introduced in Back to the Future Part II may have missed the mark by a few years, the next decade will see a revolution unlike any since humans first took flight.
What is a Flying Taxi?
The term flying taxi is often confused with an established class of transportation services known as air taxis: smallish airplanes or helicopters that shuttle passengers short distances, city to city, usually from one airport to another.
The modern flying taxi, however, takes the idea of a short-haul flight to a whole new level.
What makes the flying taxi concept both unique and potentially viable in a modern setting is the aircraft's ability to take off and land anywhere – no airport necessary.
Thanks to their vertical takeoff and landing capabilities, the aircraft currently being tested are more akin to helicopters, but the designs aren't limited to well-known methods of flight. Some prototypes now resemble oversized drones or gondola cabs with an array of small rotors attached to the roof.
Many of the designs carry only a handful of riders – from as few as two up to five to seven, not including the pilot in the non-autonomous concepts. Crewless flight is likely still one to two decades away, but, much as with the driverless automobile, the push to one day make flying taxis pilotless is an aggressive one.
The small size, though, is key to the technology becoming a meaningful addition to an already crowded transportation network. So is the plan for many of these craft to be electric, eliminating the noise and nuisance of a gas-powered engine.
In rising above gridlocked avenues and streets, flying taxis would make use of every layer of the urban setting: the ground level (in some areas), the airspace between or just above a city's mid- and high-rise buildings, and the rooftops of those same structures. Most proposals call for those rooftops to be converted into launch and landing pads for the taxi network.
An airborne taxi service carrying passengers from point A to point B within a densely packed city wouldn't stop at the city limits, though. There are also plans to expand that reach with short-haul, low-occupancy flights between closely networked cities.
Places within an hour's drive of each other, such as Dallas and Fort Worth or Baltimore and Washington, DC, are obvious candidates. Taxi flights could also bridge locales like Boston and New York or Los Angeles and San Francisco.
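How much time such hops might save comes down to simple arithmetic. The sketch below is purely illustrative: both the 150 mph cruise speed and the rough straight-line distances are assumptions for the sake of the example, not figures quoted by any operator.

```python
# Back-of-the-envelope flight times for the city pairs mentioned above.
# The 150 mph cruise speed and the distances are illustrative assumptions only.
CRUISE_MPH = 150

routes_miles = {
    "Dallas - Fort Worth": 32,
    "Baltimore - Washington, DC": 38,
    "Boston - New York": 190,
    "Los Angeles - San Francisco": 350,
}

for route, miles in routes_miles.items():
    minutes = miles / CRUISE_MPH * 60
    print(f"{route}: ~{minutes:.0f} minutes in the air")
```

Even at that modest assumed speed, the longer pairings shrink from multi-hour drives to flights of an hour or two.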
More Than Just Flying Cars
While on paper the whole enterprise seems ridiculously cool and simple enough, the reality is something different.
Uber, the peer-to-peer ridesharing behemoth, is one of the most visible players in the race to get the flying taxi up and running, with its Elevate and UberAir programs.
In partnership with NASA, Uber is working toward having its taxis take flight in 2020 in Dallas-Fort Worth, Los Angeles, and Dubai. It's an aggressive goal considering that Uber remains in the design phase and has yet to produce a working, to-scale prototype.
But they are undeterred.
Jeff Holden, head of product at Uber, has said, "There's been a great deal of progress that's been hard to see from the outside because a lot of this is just hard work at the drafting table."
He goes on to note, "We feel really good. It's been a really interesting process getting our vehicle manufacturing partners aligned on performance specifications so that they're building vehicles that align with what we need to make Elevate successful. So lots of good progress there."
Expanding upon its unmanned aircraft system traffic management (UTM) protocols, NASA is helping to nail down the infrastructure side of the endeavor.
The UTM system is currently helping to corral the unruly nature of the growing drone industry. In theory, NASA’s UTM would lead to the creation of an entirely new system of air traffic control to guide the taxi flights.
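NASA has not published a taxi-specific version of that system, but the core idea behind this kind of traffic management can be illustrated with a toy corridor-reservation sketch: a flight is cleared only if its requested corridor and time window don't collide with an existing booking. All the names here (Reservation, AirspaceManager) are hypothetical and stand in for whatever the real system would use.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Reservation:
    """A hypothetical booking of one air corridor for a time window (minutes)."""
    corridor_id: str
    start_min: float
    end_min: float

    def overlaps(self, other: "Reservation") -> bool:
        # Two bookings conflict only if they share a corridor and their
        # time windows intersect.
        return (self.corridor_id == other.corridor_id
                and self.start_min < other.end_min
                and other.start_min < self.end_min)

@dataclass
class AirspaceManager:
    """Toy deconfliction service: grants a reservation only if it is conflict-free."""
    booked: list = field(default_factory=list)

    def request(self, res: Reservation) -> bool:
        if any(res.overlaps(b) for b in self.booked):
            return False          # deny: corridor already claimed for that window
        self.booked.append(res)   # grant and record the slot
        return True

# Two taxis asking for the same downtown corridor at overlapping times.
utm = AirspaceManager()
print(utm.request(Reservation("downtown-east", 0, 12)))   # True: corridor is free
print(utm.request(Reservation("downtown-east", 10, 20)))  # False: window overlaps
print(utm.request(Reservation("downtown-east", 12, 20)))  # True: back-to-back is fine
```

The real challenge, of course, is doing this at city scale, in real time, for thousands of craft at once.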
Although the push for localized flying transports has yet to generate the same publicity as that of their earthbound automobile counterparts, Uber is far from the only player in the field. More than 15 different companies are working towards similar goals, and in many cases, a lot of investment dollars are flowing into these efforts to get them off the ground.
For example, Kitty Hawk is a startup owned and fully funded by Larry Page, co-founder of Google. Kitty Hawk is currently testing a recreational hovercraft in New Zealand meant to dovetail into their flying taxi program over the next three years.
Other companies wanting in on the action include aviation heavyweights Boeing and Airbus.
Boeing bought Aurora Flight Sciences Corporation late last year to give both its commercial and military programs in electric and autonomous flight a shot in the arm. Greg Hyslop, Boeing's Chief Technology Officer, noted that the deal reflects that "the aerospace industry is going to be changing" and that Boeing aims to be ready "for whatever that future may be."
For its part, Airbus made a similar move, investing in the startup Blade, which already boasts a charter flight business that is, ironically enough, often cast as the Uber of charters. That is in addition to Airbus' in-house Vahana program.
Elsewhere, an 18-rotor vehicle called the Volocopter showed off at CES 2018 in Las Vegas; until recently it was running test flights in the futuristic desert playground of Dubai.
Straight out of a sci-fi movie, the Volocopter is a German-designed pilotless drone that one really must see to believe and appreciate.
Dubai also has a partnership with the Chinese firm EHang, whose own flying taxi ambitions stem from its work automating the delivery of organ transplant materials via drone aircraft.
Even parts and component manufacturers are playing a pivotal role in making the sci-fi of flying vehicles real.
British engine maker Rolls-Royce has a propulsion system in development for use in flying taxis. They hope to have it available sometime within the next decade.
And yes, some auto manufacturers are getting into the game, with Porsche in the early stages of exploring the possibility.
Just How Viable Is A Flying Vehicle?
As with any new technology, growing pains exist, and flying cars are no different. There will almost certainly be a level of turbulence before the public fully embraces the latest tech and it is scaled for the masses.
Consider that the now-ubiquitous iPhone is less than 12 years old and was once a curiosity. The prevalence of the device, and the advances made in just over a decade, are remarkable. The hope is that the flying taxi can follow a similar fast-track path to success.
Of course, airborne taxis are a different realm entirely. As much as people yearn to see a car fly – and practically so – it's another thing to ask those same people to take a ride. It will take a convincing sales pitch to get commuters to trust a machine that carries onboard parachutes as standard equipment.
However, with cities more crowded and street-level gridlock a constant complaint of urban dwellers, it’s not difficult to envision city skies filled with swarms of on-demand taxis.
Florian Reuter, the CEO of Volocopter, summarizes the ease of use that autonomous flight offers: "Implementation would see you using your smartphone, having an app, and ordering a volocopter to the next voloport near you. The volocopter would come and autonomously pick you up and take you to your destination," he said.
Discounting that level of simplicity and convenience is hard.
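Reuter's description boils down to a very simple dispatch flow: locate the nearest pad, send a craft, fly to the destination. Here is a minimal sketch of just the first step under assumed names (Voloport, request_ride) that are hypothetical, not part of any real Volocopter app.

```python
import math
from typing import NamedTuple

class Voloport(NamedTuple):
    """A hypothetical landing pad; the real network's data model is unknown."""
    name: str
    lat: float
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates (haversine formula)."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def request_ride(rider_lat, rider_lon, ports):
    """Return the closest pad to the rider -- the 'next voloport near you'."""
    return min(ports, key=lambda p: distance_km(rider_lat, rider_lon, p.lat, p.lon))

# Example: a rider in central Dallas chooses between two illustrative pads.
ports = [
    Voloport("Downtown Rooftop", 32.7767, -96.7970),
    Voloport("Airport North", 32.8998, -97.0403),
]
print(request_ride(32.78, -96.80, ports).name)  # -> Downtown Rooftop
```

Everything hard about the service – routing, deconfliction, charging, safety certification – sits behind that one tap.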
As cool as it all sounds, flying taxis – even with actual testing happening as we speak – remain a construct of the future. As noted, many of the target dates for these aerial taxi programs fall between 2020 and 2030, and to some those timelines are highly ambitious.
That skepticism extends even to those whose own reputations are built on lofty ambitions.
Elon Musk shared his thoughts on flying cars in a recent Bloomberg interview, and they were less than favorable. "Obviously, I like flying things. But it's difficult to imagine the flying car becoming a scalable solution," he said.
Uber’s Holden, however, disagrees. “We’ve studied this carefully and we believe it is scalable,” he noted, also casting Musk’s comments as “off the cuff” and “random.”
Final Thoughts
Regardless of whether it can actually happen within the next few years, many are banking on it being simply a matter of time before flying taxi services buzzing over our heads are a daily sight.
While the initial product may prove a bit different from the original vision, few will complain should one of the longest-held fantasies of future progress finally come true.
Written by Anna Kučírková