Ivy – Bringing LLMs to Ethiopia's 35 million offline students
I'm a developer based in Addis Ababa. While the AI boom is happening in the cloud, 35 million students in my region are being left behind: they have no reliable internet and can't afford the data costs of hitting a GPT-5 API.

I built Ivy to solve this. It's an educational co-pilot designed to run entirely on-device (edge inference).

How it works:

Offline inference: optimized local LLMs running on $150 entry-level Android hardware.

Native support: integrated logic for local languages (Amharic) to bridge the cultural gap.

Sovereignty: the system is designed to keep functioning when the grid or the gateway is down.

I've documented the architecture and a 4-minute demo of the system running in a real-world environment here: https://builder.aws.com/content/39w2EpJsgvWLg1yI3DNXfdX24tt/aideas-ivy-the-worlds-first-offline-capable-proactive-ai-tutoring-agent

I'm currently in the quarter-finals of a global AWS challenge to get Ivy the resources it needs to scale. I'm looking for technical feedback on the edge-inference logic and on the sync protocols for when these devices do hit a 2G signal.

Happy to answer questions about the reality of building LLM infrastructure in a zero-connectivity environment.
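On the "$150 entry-level Android hardware" claim, here is a back-of-envelope sizing check. The model sizes, quantization level, and 20% runtime overhead are my assumptions for illustration, not numbers from the post:

```python
# Rough RAM estimate for a quantized on-device LLM. Assumptions (mine,
# not the author's): weights dominate memory, plus ~20% overhead for
# KV cache and runtime buffers. A ~$150 Android phone typically has
# 3-4 GB RAM, of which perhaps 1.5 GB is safely usable by one app.
def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Approximate resident size in GB: weights plus runtime overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 1B-parameter model at 4-bit quantization: ~0.6 GB -> plausible.
print(round(model_ram_gb(1.0, 4), 2))
# A 7B model at 4-bit: ~4.2 GB -> out of reach for this hardware tier.
print(round(model_ram_gb(7.0, 4), 2))
```

This is why the sub-2B, 4-bit-quantized model class is the usual fit for this device tier; anything larger forces aggressive context limits or swapping.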
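On the sync-protocol question, one common pattern for intermittent 2G links is store-and-forward: queue events locally, then flush small compressed batches whenever a connection window appears. A minimal sketch, assuming events are JSON records in SQLite and a ~16 KB batch cap so a single upload can survive a brief 2G window (the schema and cap are my choices, not Ivy's actual protocol):

```python
# Store-and-forward outbox for intermittent connectivity. Events are
# queued durably in SQLite; batches are gzip-compressed and deleted
# only after the server acknowledges them, so a dropped 2G connection
# never loses data (at-least-once delivery).
import gzip
import json
import sqlite3

BATCH_BYTES = 16 * 1024  # keep payloads small for slow, flaky links

class SyncQueue:
    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox ("
            "id INTEGER PRIMARY KEY, payload TEXT)")

    def enqueue(self, event: dict) -> None:
        self.db.execute("INSERT INTO outbox (payload) VALUES (?)",
                        (json.dumps(event),))
        self.db.commit()

    def next_batch(self):
        """Return (ids, gzip blob) of the oldest events under BATCH_BYTES."""
        ids, events, size = [], [], 0
        for rowid, payload in self.db.execute(
                "SELECT id, payload FROM outbox ORDER BY id"):
            if ids and size + len(payload) > BATCH_BYTES:
                break
            ids.append(rowid)
            events.append(json.loads(payload))
            size += len(payload)
        if not ids:
            return [], b""
        return ids, gzip.compress(json.dumps(events).encode())

    def ack(self, ids) -> None:
        """Delete events only after the server confirms receipt."""
        self.db.executemany("DELETE FROM outbox WHERE id = ?",
                            [(i,) for i in ids])
        self.db.commit()
```

The receiving side then needs to deduplicate (at-least-once means a batch can arrive twice if the ack is lost), which a per-device sequence number handles cheaply.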