
Introducing PlanAI: Streamlining Task Automation with AI

Exploring how PlanAI enhances task automation using graph-based architectures and seamless LLM integration, featuring practical applications like textbook Q&A generation.

PlanAI is a framework I’ve developed to automate tasks involving Large Language Models (LLMs). It uses a data-flow, graph-based architecture that invokes LLMs as needed. For more information, you can visit PlanAI’s website or explore the GitHub repository. PlanAI is released under the Apache 2.0 license.

Key Features

PlanAI constructs a data-flow graph in which tasks are organized into interconnected TaskWorkers. Execution is highly parallel, limited only by the data-flow dependencies in the graph. A significant design feature is input provenance, which tracks the lineage and origin of every task throughout the workflow.
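
The core idea can be sketched in plain Python. This is a conceptual illustration only, not PlanAI’s actual API: the `Task` and `Worker` classes and their method names below are hypothetical stand-ins for PlanAI’s Task and TaskWorker abstractions.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    data: str
    # Input provenance: the chain of workers that produced this task.
    provenance: List[str] = field(default_factory=list)

@dataclass
class Worker:
    """Hypothetical stand-in for a TaskWorker node in the data-flow graph."""
    name: str
    fn: Callable[[str], str]
    downstream: List["Worker"] = field(default_factory=list)

    def consume(self, task: Task) -> List[Task]:
        # Each output task extends the input's lineage with this worker's name.
        out = Task(self.fn(task.data), task.provenance + [self.name])
        results = [out]
        for w in self.downstream:
            results.extend(w.consume(out))
        return results

# Wire a tiny two-stage graph: clean -> shout.
shout = Worker("shout", str.upper)
clean = Worker("clean", str.strip, downstream=[shout])

results = clean.consume(Task("  hello  "))
final = results[-1]
print(final.data)        # HELLO
print(final.provenance)  # ['clean', 'shout']
```

A real PlanAI graph dispatches tasks to workers concurrently; the sketch runs them sequentially to keep the provenance mechanics visible.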

The Role of Input Provenance

Input provenance allows each worker within the graph to access the complete history of the tasks that precede it. This functionality means that:

  • Data Efficiency: There’s no need for redundant data structures, as workers can directly access necessary information from previous tasks.
  • Task Coordination: Workers can efficiently combine results from stages that fan out earlier in the graph, because shared ancestry is recorded in each task’s provenance.
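
To make the coordination point concrete, here is a toy join worker that groups fanned-out results by a shared ancestor read from their provenance. The `Joiner` class is hypothetical, not part of PlanAI’s API:

```python
from collections import defaultdict

class Joiner:
    """Hypothetical join worker: buffers fanned-out results by their shared
    root task id (read from provenance) and combines each complete group."""
    def __init__(self, expected: int):
        self.expected = expected          # group size to wait for
        self.pending = defaultdict(list)  # root id -> buffered results
        self.combined = {}                # root id -> finished group

    def consume(self, data, provenance):
        root = provenance[0]  # the shared ancestor identifies the group
        self.pending[root].append(data)
        if len(self.pending[root]) == self.expected:
            self.combined[root] = sorted(self.pending.pop(root))

joiner = Joiner(expected=3)
# Three results fanned out from the same root task "doc-1":
for chunk in ["b", "c", "a"]:
    joiner.consume(chunk, provenance=["doc-1", f"split-{chunk}"])
print(joiner.combined["doc-1"])  # ['a', 'b', 'c']
```

Because the grouping key comes from provenance rather than from data copied into every task, upstream workers need no knowledge of the join.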

Practical Application: Textbook Q&A Generation

An example application shipped with PlanAI generates high-quality textbook question-and-answer (Q&A) pairs. The app, covered in detail in the textbook app example, demonstrates PlanAI’s ability to manage complex workflows involving AI-driven content generation:

  • Complete Workflow: PlanAI processes textbook content to generate Q&A pairs, ideal for educational purposes or model training. It utilizes a series of AI-powered workers for tasks like text cleaning, relevance filtering, question and answer generation, and evaluation.
  • Parallel Processing: The framework executes tasks in parallel while limiting concurrent LLM API usage, keeping throughput high without overwhelming the underlying providers.
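
The pipeline described above can be sketched as a chain of plain functions. This is a toy illustration: in the actual app each stage is an AI-powered worker, and the function names and heuristics below are invented for the example.

```python
def clean_text(text: str) -> str:
    # Normalize whitespace left over from text extraction.
    return " ".join(text.split())

def is_relevant(text: str) -> bool:
    # Toy relevance filter; the real stage uses an LLM judgment.
    return "history" in text.lower()

def generate_qa(text: str) -> dict:
    # Toy Q&A generation; the real stage prompts an LLM.
    return {"q": f"What does this passage cover? ({text[:20]}...)", "a": text}

def evaluate(qa: dict) -> bool:
    # Toy quality gate; the real stage scores the pair with an LLM.
    return len(qa["a"]) > 10

chunks = ["  A short history of   printing.  ", "Page intentionally left blank."]
qa_pairs = []
for chunk in chunks:
    text = clean_text(chunk)
    if not is_relevant(text):
        continue  # relevance filter drops non-content chunks
    qa = generate_qa(text)
    if evaluate(qa):
        qa_pairs.append(qa)
print(len(qa_pairs))  # 1
```

In PlanAI the same stages run as independent workers, so many chunks move through cleaning, filtering, generation, and evaluation concurrently.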

You can explore the generated history Q&A datasets here: World History Since 1500 and World History to 1500.

Automating Prompt Optimization

PlanAI also facilitates automated prompt optimization using real production data. By enabling debug traces, an application captures the inputs and outputs it actually sees; PlanAI can then use those traces to improve its prompts automatically, with the optimization goal being the only manual input required.
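
The optimization loop can be caricatured as propose-and-score hill climbing. Everything here is a stand-in: the real optimizer runs the LLM against recorded traces and scores outputs against the stated goal, whereas this sketch uses keyword matching.

```python
def score(prompt: str, keywords) -> float:
    """Toy scoring: fraction of trace-derived keywords the prompt covers.
    In practice this would run the LLM and grade outputs against the goal."""
    return sum(kw in prompt for kw in keywords) / len(keywords)

def propose(prompt: str, keywords) -> str:
    """Toy rewriter: appends missing keywords. A real optimizer would ask an
    LLM to rewrite the prompt using the recorded debug traces."""
    missing = [kw for kw in keywords if kw not in prompt]
    return prompt + " Mention: " + ", ".join(missing) if missing else prompt

traces = ["dates", "causes", "sources"]  # keywords mined from debug traces
prompt = "Write a history question."
for _ in range(3):  # keep candidates only when they score better
    candidate = propose(prompt, traces)
    if score(candidate, traces) > score(prompt, traces):
        prompt = candidate
print(score(prompt, traces))  # 1.0
```

The point of the structure, not the heuristics, is what carries over: production traces supply the evaluation data, so only the goal needs to be written by hand.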

Conclusion

PlanAI is a flexible tool for streamlining task automation processes involving LLMs. Its input provenance and data-flow-oriented graph design provide an efficient, organized way to manage a wide variety of tasks. While I built the framework to enable another project I am working on, it stands well on its own.

For anyone interested, I encourage exploring PlanAI’s GitHub repository for more technical details and examples.

Feel free to reach out with any questions or feedback.

Comparison with Alternatives

LangGraph is a similar, more mature alternative to PlanAI, offering many advanced features for building complex, agent-driven applications. My motivation in developing PlanAI, however, was to explore the design space differently: I wanted to emphasize a robust, data-flow-oriented system with full provenance tracking. Because each component can access the complete history of its inputs, workflows coordinate more efficiently and the system is easier to debug.
