1 · Author: lorepieri · 27 days ago · original post
TLDR: I am using AI (and more) to make robotic teleoperation faster and sustainable over long periods, enabling large-scale real robotic data collection for robotic foundation models.

We are probably 5-6 orders of magnitude short of the real robotic data we will need to train a foundation model for robotics, so how do we get it? I believe simulation and video can be a complement, but there is no substitute for a ton of real robotic data.

I've been exploring approaches to scale robotic teleoperation, which has traditionally been relegated to slow, high-value use cases (nuclear decommissioning, healthcare). Here's a short video from a raw testing session (it requires a lot of explanation!):

https://youtu.be/QYJNJj8m8Hg

What is happening here?

First of all, this is true robotic teleoperation (people often confuse controlling a robot in line of sight with teleoperation): I am controlling a robotic arm via a VR teleoperation setup without wearing the headset, to improve ergonomics, while watching camera feeds instead. It runs over Wi-Fi, with a simulated 300 ms latency + 10 ms jitter (an international round-trip latency, say UK to Australia).

On the right, a pure teleoperation run is shown. Disregard the weird "dragging" movements: they come from a drag-and-drop implementation I built that lets the operator reposition the human arm into a more favourable position without moving the robotic arm. Some of the core issues with affordable remote teleoperation are reduced 3D spatial awareness, the human-robot embodiment gap, and poor force-tactile feedback. Combined with network latency and limited robotic hardware dexterity, they result in slow and mentally draining operation. Teleoperators often employ a "wait and see" strategy, as in the video, to reduce the effects of latency and reduced 3D awareness. It's impractical to teleoperate a robot like this for hour-long sessions.

On the left, an AI helps the operator in two ways to sustain long sessions at a higher pace. There is an "action AI" executing individual actions such as picking (right now the action AI is a mixture of VLAs [Vision Language Action models], computer vision, motion planning, and dynamic motion primitives; in the future it will be VLAs only), and a "human-in-the-loop AI" that dynamically arbitrates when to give control to the teleoperator or to the action AI. The final movement is a fusion of the AI and operator movements, with dynamic weighting based on environmental and contextual factors. In this way the operator is always in control and can handle all the edge cases the AI cannot, while the AI does the lion's share of the work in subtasks where enough data is already available.

Currently it speeds up experienced teleoperators by 100-150%, and much more for inexperienced teleoperators. The reduction in mental workload is noticeable from the first few sessions. An important challenge is speeding things up even further relative to a human over long sessions. Technically, besides AI, this is about improving robotic hardware, 3D telepresence, network optimisation, teleoperation design, and ergonomics.

I see this effort as part of a larger vision to improve teleoperation infrastructure, scale up robotic data collection, and deploy general-purpose robots everywhere.

About me: I am currently Head of AI at Createc, a UK applied robotics R&D lab, where I have built hybrid AI systems. I am also a 2x startup founder (the last one was an AI-robotics exit).

I posted this to gather feedback early.
I am keen to connect if you find this exciting or useful! I am also open to early-stage partnerships.
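To make the arbitration and fusion step described above a bit more concrete, here is a minimal, purely illustrative Python sketch of blending an operator command with an action-AI command under a dynamic weight. The function names, heuristics, and thresholds are my own assumptions for illustration only, not the actual system from the post.

```python
import numpy as np

def arbitration_weight(tracking_error, ai_confidence, near_contact):
    """Toy arbitration policy: how much authority to give the action AI.

    Illustrative heuristics only -- the post says the weighting is dynamic
    and based on environmental/contextual factors, not how it is computed.
    """
    w = ai_confidence
    if near_contact:            # back off near contact, let the human lead
        w *= 0.3
    if tracking_error > 0.05:   # operator is actively correcting -> defer to them
        w *= 0.5
    return float(np.clip(w, 0.0, 1.0))

def fuse_commands(operator_twist, ai_twist, w):
    """Blend operator and AI end-effector velocity commands."""
    return (1.0 - w) * np.asarray(operator_twist) + w * np.asarray(ai_twist)

# Example: 6-DoF end-effector twists (vx, vy, vz, wx, wy, wz)
operator = np.array([0.05, 0.00, -0.02, 0.0, 0.0, 0.1])
ai       = np.array([0.04, 0.01, -0.03, 0.0, 0.0, 0.0])
w = arbitration_weight(tracking_error=0.02, ai_confidence=0.8, near_contact=False)
print(fuse_commands(operator, ai, w))   # command actually sent to the arm
```

In a real shared-control loop this blend would run at the control rate, with the weight recomputed every cycle so the operator can always override the AI.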
1 · Author: haya21_8 · 27 days ago · original post
Show HN: Enfiy Code – Universal AI coding assistant with multi-provider support

Hi HN! I built Enfiy Code, a command-line AI coding assistant that works with multiple AI providers (Anthropic Claude, OpenAI GPT, Google Gemini, Ollama for local models, etc.) from a single interface.

Key features:

• Switch between AI providers seamlessly and compare responses from different models
• Works with large codebases using extended context support
• Supports both cloud AI (powerful) and local AI (private) via Ollama
• Integrates external tools through MCP (Model Context Protocol)
• Generates apps from PDFs/sketches using multimodal AI
• Auto-handles complex tasks like PR reviews and git operations

The CLI is built with TypeScript/Node.js and is fully open source (Apache 2.0). You can try it without installing: `npx @enfiy/enfiy-code`

What makes it different from other AI coding tools is the provider flexibility: you're not locked into one AI service, and you can run everything locally if privacy is a concern.

Would love feedback from the HN community, especially on the multi-provider approach and MCP integrations!

GitHub: https://github.com/enfiy/enfiy-code
1 · Author: ahaucnx · 27 days ago · original post
I just created a quiz that reveals how corporate legal language can trap communities into losing control of their environmental data.

It is based on real terms and conditions typical of air quality monitoring manufacturers.

Try it and see if you can spot the red flags: https://www.airgradient.com/aq-data-ownership-quiz/

The aim of the game is to raise awareness of data ownership rights, e.g.:

- How "joint ownership" isn't really sharing
- Why "free" services often delete your data without warning
- How subscription models hold your community's air quality data hostage
- The difference between data access and data ownership

The game was inspired by the incredible work of the EPIC Air Quality Fund, which is fighting to expand access to air quality data. Their research revealed that nearly 40% of countries lack open air quality data, often because restrictive corporate terms prevent communities from truly owning their environmental information.

At AirGradient, we believe open source hardware is the ultimate solution. When communities control both the hardware AND the data, they achieve true environmental justice.