
Memberships

3 contributions to AI Developer Accelerator
Updated Claude Project system prompt reflecting Anthropic's latest publication on default instructions
Hi, I'm sharing my new, updated Claude Project system prompt. This revision incorporates insights from Anthropic's recent disclosure of Claude's default instructions, improving our understanding of Claude's core functionality and behavior. https://gist.github.com/bartolli/5291b7dd4940b04903cae6141303b50d#file-updated-template
2
0
Put your Claude Project on steroids: A small tip as an addition to Brandon's 'New Claude 3.5 Crash Course' video
I've been working on a complex project with a GraphDB ontology and found a helpful approach: setting a system prompt for Claude's Project has significantly improved my workflow. I use Claude's console to generate this prompt, which I then add to the project. Here's a link to the two templates: one is the SYSTEM_PROMPT for Claude's Project, the other is a template for a PROJECT_BLUEPRINT. https://gist.github.com/bartolli/5291b7dd4940b04903cae6141303b50d

Here's the workflow I use to generate these:

1. I import the project requirements or brief into Claude and ask it to refine and adjust the brief to my liking.
2. Once I'm happy with the brief, I ask Claude to generate a system prompt.
3. When I'm satisfied with the result, I copy it, move to the Claude Console, go to Generate Prompt, paste it in, and get the response back.
4. The output is my system prompt file. I run this prompt in the console for a workbench evaluation and fine-tuning, adjust it to my liking, and copy the final prompt.
5. Back in my Claude Project, I click Edit on the right-hand panel, just below the file-storage progress bar where it says "Custom Instructions will change how Claude behaves and ...", and paste my system prompt there.
6. I upload the PROJECT_BLUEPRINT.

It's worth mentioning that using XML tags and curly brackets somehow made Claude respond faster. I assume those tags give Claude direct access to the uploaded artifacts without having to perform any RAG operations, which is also token-friendly.

A key feature of this prompt is how it guides Claude in choosing appropriate artifact types. It instructs Claude to use Markdown tables for simple comparisons, LaTeX for complex tabular data, Mermaid diagrams for processes and relationships, code blocks for executable code, and SVG for custom graphics. This ensures that information is presented in the clearest, most effective form. Now you're ready to start querying Claude.
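For reference, the artifact-selection part of such a system prompt might look roughly like this. This is only a sketch of the idea — the tag name and wording here are my own, not taken from the actual template:

```xml
<artifact_selection>
  When presenting information, choose the artifact type as follows:
  - Simple comparisons: Markdown tables
  - Complex tabular data: LaTeX
  - Processes and relationships: Mermaid diagrams
  - Executable code: code blocks
  - Custom graphics: SVG
</artifact_selection>
```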
9
12
New comment Sep 13
Put your Claude Project on steroids: A small tip as an addition to Brandon's 'New Claude 3.5 Crash Course' video
1 like • Aug 25
I haven't used Claude Dev myself, but from what I can see, it's an agentic application. Agent workflows typically involve a chain-of-thought process, which means a lot of back-and-forth over the API. This can get expensive quickly: every time Claude needs to access a file or directory, it requires another API call, which doesn't seem very efficient to me. Instead, I've been using the VS Code extension called Continue, and it's been fantastic. I highly recommend it. One of its great features is that you can configure different models for different tasks. For example, I use local models for autocompletion and embeddings, while I use Claude 3.5 for chat functionality. You should definitely check it out. https://github.com/continuedev/continue docs: https://docs.continue.dev/how-to-use-continue
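As an illustration, a Continue `config.json` along those lines might look like this. Treat it as a sketch: the exact schema can vary between Continue versions, and the model names and key are placeholders — check the Continue docs for your version:

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620",
      "apiKey": "YOUR_API_KEY"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local autocomplete",
    "provider": "ollama",
    "model": "starcoder2:3b"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```

With this setup, autocompletion and embeddings stay local (and free), while chat goes to Claude.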
0 likes • Aug 25
Oh, I see. The "tokens in" are actually the ones eating up your limit. The "tokens out" are Claude's responses. I'd recommend trying out Perplexity with Claude 3.5 Sonnet. It has an excellent workflow and cleverly carries over the pasted files throughout the session. The UI is user-friendly too. If you'd like, give me a prompt and I can run it with Perplexity Pro and share a screenshot with you here. Here's how I use it with Claude sometimes.
Long term memory for conversations
I’m testing different ways to maintain a “long-term” memory in conversational chatbots. From your experience, which methods have given you the best results? Eg. RAG/Vector Database + the last X messages in the prompt, …
2
4
New comment Aug 23
0 likes • Aug 22
When you build an intelligent RAG system, it's good practice to consider the following aspects:

- Dynamic session management efficiently handles multiple concurrent conversations, tracking both active and historical sessions. It optimizes resource allocation while maintaining crucial contextual information.
- The dual-storage approach implements a two-tiered system that stores raw conversation data as individual document pairs, each associated with a unique session ID, while also creating and maintaining quick-access summaries for efficient information retrieval.
- Intelligent summarization generates concise summaries of past conversations, enabling rapid context recall without the need to parse entire chat logs.
- Context-aware retrieval functions as an attentive system that recalls past interactions and surfaces relevant information, enhancing the coherence and personalization of each interaction.
- Asynchronous processing handles resource-intensive tasks in the background, ensuring responsive conversations even during peak system activity.

Implementation highlights:

- The initialization process prepares the conversational environment by setting up necessary components, including the user ID, session ID, and other critical parameters.
- Periodic processing employs a background task to identify and process inactive sessions, creating summaries and clearing processed sessions from active memory.
- The summarization engine uses an agent-based approach to distill lengthy conversations into concise, informative summaries.
- Vector-based retrieval implements a high-performance information retrieval system using vector representations for quick and relevant data access.

Optimization strategies:

- Memory-index integration creates a unified reference system that combines personal interaction history with general knowledge for rapid information access.
- Hierarchical indexing organizes information in a structured, tree-like format for efficient navigation from broad categories to specific details.
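To make the dual-storage and periodic-processing ideas concrete, here's a minimal Python sketch. All names are my own, and the summarizer is a trivial stand-in for an LLM-based summarization agent:

```python
import time
from collections import defaultdict


class ConversationMemory:
    """Dual-storage sketch: raw per-session message logs plus
    quick-access summaries created when a session goes inactive."""

    def __init__(self, inactive_after=1800):
        self.raw = defaultdict(list)   # session_id -> [(role, text), ...]
        self.last_seen = {}            # session_id -> last activity timestamp
        self.summaries = {}            # session_id -> summary string
        self.inactive_after = inactive_after

    def add_message(self, session_id, role, text, now=None):
        now = time.time() if now is None else now
        self.raw[session_id].append((role, text))
        self.last_seen[session_id] = now

    def summarize(self, messages):
        # Placeholder: a real system would call an LLM summarization agent.
        return " | ".join(text for _, text in messages[-3:])

    def process_inactive(self, now=None):
        """Background pass: summarize sessions idle past the threshold
        and clear them from active memory."""
        now = time.time() if now is None else now
        for sid in list(self.raw):
            if now - self.last_seen[sid] >= self.inactive_after:
                self.summaries[sid] = self.summarize(self.raw[sid])
                del self.raw[sid]
                del self.last_seen[sid]

    def context_for(self, session_id, last_n=5):
        """Context-aware retrieval: recent turns if the session is
        active, otherwise its stored summary."""
        if session_id in self.raw:
            return self.raw[session_id][-last_n:]
        return self.summaries.get(session_id)
```

In a real deployment, `process_inactive` would run as an asynchronous background task, and summaries would be embedded into a vector index for retrieval.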
0 likes • Aug 22
I forgot to mention something. In my setup, I've created an enhanced retrieval mechanism that extends the base retriever. The EnhancedRetriever integrates multiple data sources: a knowledge base, research documents from trusted sources, a medical database, and conversation history. It queries these sources sequentially, triggering a LangGraph-based workflow to fetch recent research if the relevance score of the initial research-document results falls below a threshold. The retriever then combines and scores all results, ensuring rich, contextual information for complex conversations. You can apply a similar approach to enhance results by fetching data from other external or internal sources.
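The sequential-query-with-fallback pattern can be sketched like this in Python. This is my own simplified rendering, not the actual implementation: each source is any callable returning `(score, text)` pairs, and `research_fallback` stands in for the LangGraph workflow:

```python
class EnhancedRetriever:
    """Queries multiple sources in sequence, triggers a fallback
    workflow when the research source scores poorly, then combines
    and ranks all results by relevance."""

    def __init__(self, sources, research_fallback, threshold=0.7):
        self.sources = sources                      # {"knowledgebase": fn, "research": fn, ...}
        self.research_fallback = research_fallback  # e.g. a LangGraph-based workflow
        self.threshold = threshold

    def retrieve(self, query, top_k=5):
        results = []
        for name, source in self.sources.items():
            hits = source(query)
            # If the research results are missing or score below the
            # threshold, fetch more recent material via the fallback.
            if name == "research" and (
                not hits or max(score for score, _ in hits) < self.threshold
            ):
                hits = hits + self.research_fallback(query)
            results.extend(hits)
        # Combine and score: highest-relevance passages first.
        results.sort(key=lambda pair: pair[0], reverse=True)
        return results[:top_k]
```

Adding another source (an internal wiki, a CRM, etc.) is then just one more entry in the `sources` dict.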
Angel Bartolli
@angel-bartolli-4159
Passionate thinker and designer based in Florida, deeply interested in blending user experience (UX) design with technology.

Active 56d ago
Joined Aug 21, 2024