2024-04-24-EB-7: The Oceanic Oracle

🔷 Subscribe to get breakdowns of the most important developments in AI in your inbox every morning.

Here’s today at a glance:

 🌊 EB-7: The Oceanic Oracle


Hrishi Olickel (LinkedIn) is the Founder and Chief Technology Officer (CTO) at Greywing (YC W21), a Singapore firm that helps merchant shipping companies manage their fleets and crews.


Founded in 2019, Singapore-based Greywing was created to help ship operators and other members of the maritime industry make critical decisions. It offers tools for crew change management and predictive reporting of potential risks such as piracy and pandemic-related travel restrictions. In October 2021 it raised $2.5 million in seed funding. Investors include Flexport, Transmedia Capital, Signal Ventures, Motion Ventures, Rebel Ventures, Y Combinator (Greywing was part of its winter 2021 batch) and Entrepreneur First.

  • AI Solutions: Greywing leverages large language models (LLMs) like GPT-3.5 and GPT-4 to extract structured data from unstructured sources, automate communication, and provide intelligent assistance to shipping companies.

  • Ocean Oracle: This product showcases Greywing's innovative approach to indexing and retrieving information from complex engineering diagrams and schematics, addressing the need for on-board technical expertise.

  • Proteus Copilot: This AI assistant helps shipping companies manage their data, crews, and operations more efficiently.

  • Lumentis Project: This personal project, which gained significant attention, demonstrates the potential of LLMs for automatically generating documentation from meetings and conversations.


April 8th, 2024


  • On the market response to their AI products, “We'd go in to have a meeting with a technical team and some other teams would hear about it and they'd come to us and go, hey, can we be part of the trial or can we be part of what you guys are doing?”

  • On the intelligence required for agents, “You can get far better results with a mediocre CEO and 100x the number of people under him than with a CEO with a 100x big brain. The agent at the top just needs to give out instructions and check the outputs.”

  • On the resistance of engineers to working with AI, “It felt like managing humans again... saying please and thank you and not getting deterministic output.”

  • On the surprising thing users most commonly ask the AI, “Most initial questions are meta - 'How do I use you?', not the task the system is for.”

  • On bridging response quality gaps with speed, “If you take eight seconds to come back to a user, they expect a significantly higher quality of answer or service than if you got back to them in 10 milliseconds. Because to a user, the system is thinking.”

  • On how enterprises are like sleeping AIs waking up, “Very high-value technical expertise inside of a company can now be baked into systems, because they already have the documentation.”

  • On whether the cost savings from open-source models are important, “In our neck of the woods, people are usually willing to pay for these things more. It's still a win, right? And you get it for free.”

  • On the need for models that companies can fully control in terms of data access, “The next big jump for open source is going to be when these models get embedded client-side.”

  • On how LLMs require less intelligence when connected to data-rich enterprise systems, “Agents can be dumber if your tools are better.”

  • On the style of working in large companies, “Humans that are just working through checklists and kind of being forced to work in a lot of ways like computers do? They're practically human APIs.”

Listen On

EB-7, the seventh episode of our podcast, dropped this week. Before I continue, the rules of the game are:

  • Pods that CHART stay alive

  • Pods that get a Follow on Apple Podcasts CHART

So FIRST, CLICK on the link below (opens up your Apple Podcasts app) and click “+Follow” (in the upper right-hand corner)

Then go ahead and listen to the podcast any way you want to on your preferred app, capiche mon ami?

Listen on


Hrishi and Greywing are driven by the desire to improve efficiency, safety, and decision-making in the commercial shipping industry. Their AI solutions address critical pain points, such as:

  • Data Accessibility and Usability: Extracting valuable insights from the overwhelming amount of unstructured data.

  • Crew Management: Optimizing crew rotations, ensuring compliance, and addressing the well-being of seafarers.

  • Technical Expertise: Providing access to critical information and troubleshooting support even with limited personnel on board.

  • Documentation Generation: Automating the time-consuming process of creating and maintaining accurate documentation.


  • Vertical SaaS Approach: Greywing combines existing AI technology with industry-specific knowledge to create tailored solutions for the shipping industry.

  • LLMs and Tooling: They utilize a variety of LLMs, including GPT-3.5, GPT-4, Claude, and open-source models, along with custom-built tools to enhance model capabilities.

  • Focus on User Experience: Greywing prioritizes user-friendly interfaces and clear communication to build trust and encourage adoption.

  • Client-Side Deployment: They are exploring the potential of running models client-side on user devices to improve data security and privacy.

What Are the Limitations and What's Next

  • LLM Limitations: Hrishi acknowledges that LLMs still require human evaluation and refinement, especially in complex tasks.

  • Enterprise Adoption: Integrating AI solutions within existing systems and workflows can be challenging.

  • Open-Source Development: The ecosystem for open-source models and tools needs further development.

Greywing's future plans include:

  • Expanding Ocean Oracle's capabilities: Applying their visual indexing and retrieval technology to broader datasets and industries.

  • Client-side model deployment: Leveraging web assembly and browser-based runtimes to enhance data security and privacy.

  • Continued innovation: Exploring the potential of AI agents and other emerging technologies to further improve the efficiency and sustainability of commercial shipping.

Why It Matters

Greywing's work demonstrates the transformative potential of AI for traditional industries like commercial shipping. By harnessing the power of LLMs and other AI techniques, they are tackling long-standing challenges and paving the way for a more efficient, data-driven future. Their efforts have significant implications for:

  • Improving operational efficiency and cost savings for shipping companies.

  • Enhancing the safety and well-being of seafarers.

  • Promoting sustainable practices and reducing the environmental impact of shipping.

  • Demonstrating the value of AI for enterprise applications and real-world problem-solving.

Additional Notes

  • Hrishi shared interesting anecdotes about user interactions with their AI systems, highlighting the importance of user education and trust-building.

  • The interview explored the evolving landscape of LLMs and the challenges and opportunities associated with their deployment in enterprise settings.

  • Greywing's focus on user experience and client-side deployment positions them as a leader in the application of AI to the commercial shipping industry.



On what eventually became Ocean Oracle:

On the underlying technology:

Three useful things for LLM apps with customers that no one seems to be doing:

+ more learnings from WalkingRAG

1. Stream processing: Use structured output to generate well-typed enums and short-circuits (more on that later), and use libraries like json-stream to emit token-by-token packets from text fields and other parts as they're generated. There's a huge difference between making customers wait 30 seconds for a response and 50ms for something to start happening. Every test we've run says the same.

As an example, SeaGPT sends plan token packets when it's planning, answer token packets when it's compiling an answer, toolStart packets to indicate a tool is about to be called, toolCall packets outlining the full parameters to the tool, and toolResponse packets with tool output. See this demo:
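A minimal sketch of a packet scheme like the one described above. The packet names come from the SeaGPT description; the NDJSON serialization and the helper names are assumptions for illustration, not Greywing's actual wire format:

```python
import json
from typing import Iterator

def emit(kind: str, payload) -> str:
    """Serialize one NDJSON packet the client can render as it arrives."""
    return json.dumps({"type": kind, "data": payload})

def answer_stream(tokens: Iterator[str]) -> Iterator[str]:
    """Interleave planning, tool, and answer packets, token by token."""
    yield emit("plan", "look up the engine schematic")
    yield emit("toolStart", {"tool": "schematic_search"})
    yield emit("toolCall", {"tool": "schematic_search", "query": "main engine"})
    yield emit("toolResponse", {"hits": 3})
    for tok in tokens:  # the client renders each token the moment it lands
        yield emit("answer", tok)

packets = list(answer_stream(iter(["The ", "main ", "engine ", "is..."])))
```

Because every packet is typed, the UI can show "planning" and "calling tool" states within milliseconds instead of a blank spinner while the full answer is composed.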

2. Short-circuiting: Embed parameters into your structured output that can tell you if you can end a response early - sometimes you've heard enough. If a labelling prompt is running on an empty page, have it check first so you don't need to run it.
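The empty-page check above can be sketched as follows; the function names and the short_circuit flag in the structured output are hypothetical, not a specific Greywing API:

```python
def fake_label_model(text: str) -> str:
    # Stand-in for a cheap LLM labelling call (hypothetical).
    return "machinery" if "engine" in text else "general"

def label_page(page_text: str) -> dict:
    """Label a page, but check first so no tokens are spent on empty input."""
    if not page_text.strip():
        # The structured output itself carries the short-circuit flag,
        # so downstream consumers know the response ended early on purpose.
        return {"label": None, "short_circuit": True}
    return {"label": fake_label_model(page_text), "short_circuit": False}
```

The same flag can be emitted mid-stream by the model, letting the caller stop generation as soon as it has heard enough.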

3. Backpressure: Most APIs support this, so use it to terminate requests when they're not needed. Save tokens and user time. With backpressure, you can use cheaper models to run parallel prompts for the same task, and join the output - and terminate the right ones if you find a good path early.
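The parallel-prompts-with-early-termination idea can be sketched like this. The helper names are assumptions; with a real streaming API you would additionally close the losing HTTP connections to stop paying for their tokens:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def race_prompts(prompt, runners, is_good):
    """Run the same prompt through several cheap models in parallel and
    keep the first output that passes the quality check."""
    with ThreadPoolExecutor(max_workers=len(runners)) as pool:
        futures = [pool.submit(run, prompt) for run in runners]
        for fut in as_completed(futures):
            out = fut.result()
            if is_good(out):
                # Drop paths we no longer need. Note cancel() only stops
                # work that hasn't started; a streaming client would also
                # close the in-flight connections here.
                for other in futures:
                    other.cancel()
                return out
    return None

cheap = lambda p: "draft: " + p        # stand-ins for cheap model calls
cheaper = lambda p: "??"
best = race_prompts("summarise the schematic", [cheap, cheaper],
                    lambda o: o.startswith("draft:"))
```

The quality check is what makes this cheaper than one big model: any path that produces a good answer first wins, and the rest are terminated.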

A typical ship schematic stored by Ocean Oracle.

Documentation generation side project: npx lumentis.

Generate beautiful docs from your transcripts and unstructured information with a single command.

A simple way to generate comprehensive, easy-to-skim docs from your meeting transcripts and large documents.

🌠 Enjoying this edition of Emergent Behavior? Send this web link with a friend to help spread the word of technological progress and positive AI to the world!

Or send them the below subscription link:

🖼️ AI Artwork Of The Day

Back when parents let their kids be kids - u/cmirsch from r/midjourney

That’s it for today! Become a subscriber for daily breakdowns of what’s happening in the AI world:
