Update v0.2.7: 01-12-2025
- Fix a lead-engineer tool error (the reason for this release)
- MLflow integration still needs work, but these changes start making it useful for tracking experiments
Update v0.2.5: 12-08-2024
- Add a save button to the prompt and member panels (sometimes they get out of sync)
- Team member names are now animals
Update v0.2.4: 12-08-2024
- Properly queue requests inside the extension and throttle them (goal: make the throttle configurable); a rough sketch of the idea follows this list
- Finish making the default configuration use anthropic.claude-3-5-haiku-20241022-v1:0 for team member roles
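For anyone curious what "queue and throttle" means here, this is only a minimal sketch of the idea, not the extension's actual code; the class name and the 1-second delay are assumptions.

```typescript
// Hypothetical sketch only: run LLM requests one at a time and space them out.
// The delay is hard-coded here; the stated goal is to make it configurable.
type Job = () => Promise<void>;

class ThrottledQueue {
    private jobs: Job[] = [];
    private draining = false;

    constructor(private readonly delayMs = 1000) {}

    enqueue<T>(work: () => Promise<T>): Promise<T> {
        return new Promise<T>((resolve, reject) => {
            this.jobs.push(async () => {
                try {
                    resolve(await work());
                } catch (err) {
                    reject(err);
                }
            });
            void this.drain();
        });
    }

    private async drain(): Promise<void> {
        if (this.draining) { return; }
        this.draining = true;
        while (this.jobs.length > 0) {
            await this.jobs.shift()!();                                        // one request at a time
            await new Promise(resolve => setTimeout(resolve, this.delayMs));   // throttle between requests
        }
        this.draining = false;
    }
}
```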
Update v0.2.3: 12-08-2024
- Change to anthropic.claude-3-5-haiku-20241022-v1:0
Update v0.2.1: 12-07-2024
- Focus on Anthropic
- Full out-of-the-box configuration for Bedrock-hosted models, defaulting to us-west-2
- Use stability.stable-image-core-v1:0 for graphic artist
- File locking fix
- Suppress useless error messages
Update v0.1.6: 10-13-2024
- Better event handling for saving prompts and setups
- Wrapper prompt for dall-e-3
- Additional lead-architect prompt
Update v0.1.5: 10-01-2024
- Add a new Team Member and make them a Graphic Artist
- Choose the dall-e-3 model
- This is a new feature; there is no validation and no instruction yet
I named my Graphic Artist Jenny. When I described what I wanted, I also included the following to explain Jenny's limitations and how I wanted the lead-architect to handle her assignments.
Make sure to ask Jenny to make graphics. Jenny's instructions must be very specific. You can only ask her to make one graphic file at a time and you can only describe what you want her to make in the prompt. A prompt for Jenny should be short, for example: "make me a small icon file that looks like a frog." Jenny just returns the path to the file she made. You need to work around her limitations. As the lead-architect plan out what you need from Jenny first and then tell the others what to do with what you had Jenny create!
Update v0.1.2: 09-08-2024
- NEW TOOL: Code Search - if you want to make a change that can affect multiple files, there is a new tool the LLM can use to search the solution's code (a rough sketch of such a tool follows this list).
- Better tool call error handling
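Roughly, a tool like this gets exposed to the model through langchain's tool-calling interface. The sketch below is an assumption about shape, not the extension's actual tool: the name `code_search`, the schema, and the naive substring match are all placeholders.

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import * as vscode from "vscode";

// Placeholder sketch of a code-search tool; the real tool's name, schema,
// and matching logic may differ.
export const codeSearch = tool(
    async ({ query }: { query: string }) => {
        const uris = await vscode.workspace.findFiles("**/*", "**/node_modules/**", 200);
        const matches: string[] = [];
        for (const uri of uris) {
            const bytes = await vscode.workspace.fs.readFile(uri);
            if (new TextDecoder().decode(bytes).includes(query)) {
                matches.push(vscode.workspace.asRelativePath(uri));
            }
        }
        return matches.length > 0 ? matches.join("\n") : "no matches";
    },
    {
        name: "code_search",
        description: "Search the workspace source files for a string and return matching file paths.",
        schema: z.object({ query: z.string().describe("Text to look for in the code") }),
    }
);
```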
Update v0.1.1: 09-08-2024
- MLflow experiments for tracking prompts
- MLflow config in .vscode/frogteam/config.json (a hypothetical shape is sketched below)
- Moved frogteam files to .vscode/frogteam/
- Fixed the projects.jsonb file
- Gave the answer tab state
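The config schema is not documented here, so treat this as a guess at what reading .vscode/frogteam/config.json could look like; the field names (mlflowTrackingUrl, mlflowExperiment) and the loader are assumptions for illustration.

```typescript
import * as fs from "fs";
import * as path from "path";

// Illustrative only: the actual .vscode/frogteam/config.json schema may differ.
interface FrogteamConfig {
    mlflowTrackingUrl?: string;   // e.g. "http://localhost:5001"
    mlflowExperiment?: string;    // experiment used for tracking prompts
}

export function loadFrogteamConfig(workspaceRoot: string): FrogteamConfig {
    const file = path.join(workspaceRoot, ".vscode", "frogteam", "config.json");
    if (!fs.existsSync(file)) {
        return {};
    }
    return JSON.parse(fs.readFileSync(file, "utf8")) as FrogteamConfig;
}
```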
Update v0.1.0: 09-07-2024
- Early MLflow experiment setup; MLflow only works from localhost:5001
- Fixed webview post message events
- Updated member and prompt tree items
Updates v0.0.19: 09-01-2024
- Bug Fix: task-summary type was missing
- Bug Fix: Duplicate and Delete buttons were erroring
- Updated included prompts
Updates v0.0.18: 08-30-2024
- Project level at the top of the history tree
- Project tracked in history entry
- Multi-Root Workspace Compatibility
- Task a member directly from their configuration page
- CSS and JS Webview consolidation
- Proper content security policies
Updates v0.0.17: 08-25-2024
- Commands - a top level menu entry
- History hierarchy changes
- Toggle History Grouping (see "Commands")
- Parent/child elements, but a flat tree
- This means that child elements appear under their parents and also in the root of the tree
- Respond to Answers directly
- In the History's Answer panel, when the response is Markdown there is a "Respond Here" button
- When using this feature, relevant immediate history will be included in the new LLM interaction
- The Builder now collects a project name and directory
- This information is used to format an XML block that is included in the prompt (a rough sketch follows this list)
- This tells the LLM exactly what it's getting
- System prompts will be adjusted in future versions
- The next version will use "Project Name" in the history hierarchy
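The exact tags are not spelled out above, so this is only a guess at how that XML block might be assembled from the collected project name and directory; the `formatProjectXml` helper and its tag names are hypothetical.

```typescript
// Hypothetical sketch: format the collected project name and directory into
// an XML block for the prompt. The real tag names may differ.
function formatProjectXml(projectName: string, projectDirectory: string): string {
    return [
        "<project>",
        `  <name>${projectName}</name>`,
        `  <directory>${projectDirectory}</directory>`,
        "</project>",
    ].join("\n");
}

// Example: formatProjectXml("frogteam", "./frogteam") produces a block the
// prompt can include so the LLM knows exactly which project and folder it is
// working in.
```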
Updates v0.0.16: 08-14-2024
- Azure OpenAI
- Upgrade Axios due to vulnerability report
- Added some notes to the member setup panel
Updates v0.0.15: 08-13-2024
- Tagging History Entries
- Updated History Display
Updates v0.0.14: 08-10-2024
- GitHub Repo Changes
Updates v0.0.13: 08-10-2024
- Moving to GitHub Team: https://github.com/SpiderInk/frogteam
- Lead Architect can use all implemented models.
- Added a running status indicator in the StatusBar; it says "Frogteam" when it's a project run and the member's name when it's a directed run.
- Added an Output Channel called "FrogTeam.ai" that updates with every history entry and on other events.
- Added New Member and Prompt commands to the project view to make these actions more visible.
- Added an error message to tell you when a team member has no aligned system prompt.
- New prompt for requesting task/project summary
- Wildcard prompts
- Import new prompts
- API key from environment variable (see the sketch below)
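A minimal sketch of the idea, assuming a variable name like OPENAI_API_KEY; the exact environment variable names the extension reads and the fallback behavior shown here are assumptions.

```typescript
// Hypothetical sketch: prefer an API key from the environment, falling back
// to whatever is stored in the member setup.
function resolveApiKey(configuredKey?: string): string {
    const fromEnv = process.env.OPENAI_API_KEY; // variable name is an assumption
    const key = fromEnv ?? configuredKey;
    if (!key) {
        throw new Error("No API key found in the environment or the member setup.");
    }
    return key;
}
```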
08-08-2024
Hi - Thanks for stopping by!
I decided to put this out there. It's at a nice spot where there is some functionality. The idea is to create team members that are each represented by a specific LLM. You can use a variety of different LLMs, and over time how members are selected and how assignments are made will evolve from the rudimentary state it is in today. You can use AWS Bedrock models and OpenAI models, and for now I will likely stay inside these boundaries for LLM selection:
- The model supports tool calling
- The model and its tool calling feature are supported by langchain (a rough sketch of that binding follows this list)
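To make that concrete, this is only a sketch of the kind of langchain binding I mean, using the Bedrock Haiku model ID and us-west-2 region mentioned above; the `write_file` tool and its schema are made up for illustration.

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatBedrockConverse } from "@langchain/aws";

// Illustrative sketch only: any model used for a team member has to accept a
// tool list like this through langchain and return structured tool calls.
const writeFile = tool(
    async ({ filePath, contents }: { filePath: string; contents: string }) => {
        // ...the extension would write the file inside the workspace here...
        return `wrote ${contents.length} characters to ${filePath}`;
    },
    {
        name: "write_file",
        description: "Create or overwrite a file in the project.",
        schema: z.object({ filePath: z.string(), contents: z.string() }),
    }
);

const model = new ChatBedrockConverse({
    model: "anthropic.claude-3-5-haiku-20241022-v1:0",
    region: "us-west-2",
}).bindTools([writeFile]);

const response = await model.invoke("Create hello.txt containing a short greeting.");
console.log(response.tool_calls); // the extension dispatches these calls to its tools
```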
I am currently focusing on some UI features while I enhance/refine my tool calling chain. I hope soon to move on to a system prompt sharing feature and eventually I would like to integrate RAG with local vectors. I hope then to turn around and use my extension to develop my next mobile app, whatever that may be.
I am wondering if there is an appetite for what I am doing here. Let me know your thoughts.
Here is a short demo video.