Memberships

Data Alchemy (Public • 22.6k • Free)
ChatGPT Users (Public • 11.4k • Free)
Senior Dev Academy (Public • 695 • Free)
Android Devs (Private • 988 • Free)
The Launchpad at BAIS CAMP (Public • 823 • Free)
Generative AI (Public • 240 • Free)
The 4 Hour AI Workweek (Public • 984 • Free)
AI Automation Agency Hub (Private • 53.8k • Free)
Brendan's AI Community (Public • 6k • Free)

22 contributions to Generative AI
Llamafile: Another Option in Local LLM
I'm a huge fan of Ollama, but I recently revisited Llamafile after first checking it out about six months ago. The improvements, especially in CPU inference, are quite impressive. If you're exploring local LLM solutions, Llamafile is worth a look. Its combination of features positions it as a strong choice for experimenting with or implementing local LLMs: single-file, no-install execution; user-friendly interfaces; versatile CLI, GUI, and server options; cross-platform compatibility; and a built-in OpenAI-compatible API. GitHub: [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile) Introduction by Stephen Hood and Justine Tunney: [YouTube Video](https://www.youtube.com/watch?v=-mRi-B3t6fA)
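For anyone who wants to try it, here's a minimal sketch of querying a llamafile from Python, assuming it was launched in server mode on its default port (8080) and that the `requests` package is installed:

```python
import requests

# Query a running llamafile via its OpenAI-compatible endpoint.
# Assumes the llamafile server is up on its default port, 8080.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "LLaMA_CPP",  # llamafile accepts a placeholder model name
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```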
Ollama library for running LLMs locally
Ollama is a tool to run Large Language Models locally, without the need for a cloud service. Its usage is similar to Docker's, but it's specifically designed for LLMs. You can use it as an interactive shell, through its REST API, or from a Python library. Read more here: https://www.andreagrandi.it/posts/ollama-running-llm-locally/
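As a quick illustration of the REST API mentioned above, here's a minimal sketch, assuming Ollama is running on its default port (11434) and a model such as `llama2` has already been pulled:

```python
import requests

# One-shot generation via Ollama's REST API (default port 11434).
# Assumes a model has already been pulled, e.g. `ollama pull llama2`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "Explain what a local LLM is in two sentences.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
print(resp.json()["response"])
```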
New comment Jul 14
1 like • Apr 6
Ollama is my current go-to. One curl command installs Ollama. Then pick a model and do an `ollama run <model>`; Ollama takes care of the rest. Some of the models are 70GB+, so choose your model wisely (keep an eye on the physical size of the model). I have an i7 with 16GB RAM (no GPU), so I stick to the smaller 7-13B parameter models and can reliably run models in the 5-7GB size range. Keep in mind they will be SLOW compared to the APIs, but they are fun to experiment with, and you know they are running exclusively on your local machine. If you have a GPU, it will greatly enhance the experience. I was really drawn to the LM Studio interface, but it not being open source was a no-go for me.
0 likes • Jul 14
If you or someone you know previously tried Ollama but faced challenges due to required system changes, frameworks, third-party calls, or containers, consider giving it another shot. I've uploaded some of my home-brew helpers to GitHub, designed for people who are new to local LLMs, curious and eager to experiment, or interested in exploring the full inner workings of an AI-enhanced chat interface. These tools aim to make local LLMs easy to use on a single workstation with a simple browser-based chat interface. You can find them on GitHub here: https://github.com/aznight85048/unofficial-ollama-helpers

To get started:
1. Ensure you have Ollama installed with default settings and access to Python at your command line.
2. Run `ollama_proxy.py` from the command line.
3. Open `api-chat-interface.html` in your browser to start using the chat interface.

If you run into any issues, open `availability_checker.html` from the repository in your browser to view the related ports and help with troubleshooting. These helpers aim to simplify the process for anyone wanting to experiment with local LLMs. Enjoy the journey and happy experimenting!
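For a rough idea of what a helper like `ollama_proxy.py` might be doing, here is a hypothetical sketch (not the actual code from the repo): a tiny proxy that forwards the browser's request to Ollama and adds the CORS header a local page needs.

```python
# Hypothetical sketch only, not the repo's actual code: forward a browser POST
# to Ollama (assumed default port 11434) and add the CORS header the page needs.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

OLLAMA_URL = "http://localhost:11434/api/generate"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body sent by the chat page and relay it to Ollama.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as upstream:
            payload = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")  # allow the local page
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ProxyHandler).serve_forever()
```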
Large Language Models Lack True Reasoning, Claims Expert
According to Subbarao Kambhampati, a professor at Arizona State University, recent claims that large language models (LLMs) like GPT-3, GPT-4, and ChatGPT possess reasoning and planning abilities are unfounded. Kambhampati tested these LLMs on standard planning tasks and found their empirical performance was poor, especially when object and action names were obfuscated. While fine-tuning the models on planning data can boost performance, he argues this merely converts the task to approximate retrieval rather than true reasoning.

The practice of having humans provide "chain of thought" prompting to steer LLMs is susceptible to the human unintentionally guiding the model, Kambhampati claims. He also expresses skepticism about papers claiming LLMs can self-critique and iteratively improve their own plans and reasoning.

While LLMs excel at extracting general planning knowledge and generating ideas, Kambhampati found they struggle to assemble that knowledge into executable plans that properly handle subgoal interactions. Many papers making planning claims either ignore such interactions or rely on human prompting to resolve them, he says.

Instead, Kambhampati proposes using LLMs to extract approximate domain models, which human experts then verify and refine before passing to traditional model-based solvers. This resembles classic knowledge-based AI systems, with LLMs replacing human knowledge engineers, while employing techniques to reason with incomplete models.

Overall, the AI expert argues that despite their impressive capabilities, LLMs fundamentally lack true autonomous reasoning and planning abilities as traditionally understood. However, he believes they can productively support these tasks by combining their knowledge-extraction and idea-generation strengths with external solvers and human oversight. https://cacm.acm.org/blogcacm/can-llms-really-reason-and-plan/
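To make the proposed division of labor concrete, here's a hedged sketch of a generate-and-verify loop in that spirit; `llm_propose` and `external_verifier` are hypothetical stand-ins for a real model call and a formal plan validator:

```python
# Sketch of the division of labor described above: the LLM proposes candidate
# plans, but soundness comes from an external, model-based verifier, not from
# the LLM critiquing itself. Both functions are stubs for illustration.
def llm_propose(task: str, feedback: str | None) -> list[str]:
    """Ask an LLM for a candidate plan (stubbed here)."""
    return ["pick up block A", "stack A on B"]

def external_verifier(plan: list[str]) -> tuple[bool, str]:
    """Check the plan against a formal domain model (stubbed here)."""
    return True, ""

def solve(task: str, max_rounds: int = 5) -> list[str] | None:
    feedback = None
    for _ in range(max_rounds):
        plan = llm_propose(task, feedback)
        ok, feedback = external_verifier(plan)
        if ok:
            return plan  # accepted by the verifier, not by the LLM's own judgment
    return None

print(solve("stack A on B"))
```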
New comment Apr 5
0 likes • Apr 5
I'm no expert, but I believe that as models integrate various modalities, "reasoning" will become more inherent. Models may already outperform humans in logical reasoning, with their biggest weaknesses coming from the data we provide them.
Visual guide to transformers
https://youtu.be/wjZofJX0v4M?si=UV_tB_3rwz4qzJoC
New comment Apr 2
0 likes • Apr 2
Awesome find/share! I've spent the last couple of hours watching more of his older videos. He has a really cool way of visually presenting concepts, showing where they work and their limitations. His older video about 'Gaussian + Gaussian = Gaussian' got me thinking about possible clues to what might be going on inside some of the transformer mappings.
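For anyone who wants to check the 'Gaussian + Gaussian = Gaussian' fact numerically: the sum of two independent normals N(m1, s1^2) and N(m2, s2^2) is N(m1 + m2, s1^2 + s2^2). A quick NumPy sanity check:

```python
import numpy as np

# Sum of independent Gaussians: N(1, 4) + N(-3, 2.25) should give N(-2, 6.25).
rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, size=1_000_000)   # mean 1, std 2
y = rng.normal(-3.0, 1.5, size=1_000_000)  # mean -3, std 1.5
z = x + y
print(z.mean(), z.var())  # ~ -2.0 and ~ 6.25, as predicted
```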
Bloomberg: GPT-4 is racially discriminating
This is interesting (unfortunately paywalled) work by Bloomberg: they essentially created 1,000 resumes that were by and large identical in terms of experience, grades, qualifications, etc., and beyond that only changed the names to fall equally into one of four groups: Asian, Black, Hispanic, and White, with 125 men and 125 women in each. They then asked GPT-3.5 (and also GPT-4) to rank the candidates from most to least suitable for four different roles: HR manager, software engineer, retail, and financial analyst. They ran this 1,000 times. OpenAI's GPT showed heavy racial bias, most significantly against Black men (ranked least suitable as often as 32% of the time, more than 2.5x the 'objective' 12.5% baseline) and women, and also against white men. None of this is truly surprising by my count, as the models are mere reflections of the data on which they are trained; they are essentially trained on the internet, which hardly serves as a haven of unbiased equality. https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/
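To make the methodology concrete, here's a hypothetical sketch of the ranking experiment as described; `rank_candidates` is a stand-in for the actual GPT call, and the names come from the article's name set quoted in the comments below:

```python
import random
from collections import Counter

# Hypothetical sketch of the experiment: identical resumes differing only in
# the candidate's name are ranked repeatedly, and we tally how often each name
# lands in last place. rank_candidates stands in for the actual GPT call
# (here it just shuffles, i.e. simulates an unbiased ranker).
NAMES = ["Christine Yang", "Antwon Dorsey", "Anna Vazquez", "Rebecca Schmidt"]

def rank_candidates(candidates):
    """Placeholder for the LLM ranking call; returns names most-to-least suitable."""
    order = list(candidates)
    random.shuffle(order)
    return order

last_place = Counter()
for _ in range(1000):
    last_place[rank_candidates(NAMES)[-1]] += 1

# An unbiased ranker should put each name last ~25% of the time here; the
# article's 12.5% baseline comes from splitting each of its 4 groups by gender.
print(last_place)
```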
New comment Mar 28
0 likes • Mar 13
Guessing this is more headline/marketing than science.

- Sex and race may have been the intended variables, but they also introduced more than a billion other values GPT could be using based on their name selections.
- A GPT response is only as good as the prompt; "find the right candidate" was an interesting choice of words. They could have tried something like this:

```
As an HR assistance bot, your goal is to help hiring managers find the best candidate for the job. You will evaluate resumes that are separated by <|>. Please assess each candidate against the provided job description: {job description}. To ensure fair and unbiased evaluations, please focus solely on the candidate's qualifications, experience, and skills as they relate to the job requirements. Avoid any assumptions based on personal characteristics such as gender, race, or age. Your feedback should be based solely on the information presented in the resumes.
```

- Even though the name list was somewhat random, GPT may have identified other name-related information the testers didn't intend to introduce (age, religion, popularity, etc.). To gain some insight, the prompt could be taken a step further to have GPT explain why a particular person was selected above the others. Or, run the name list through GPT and have it list the most common characteristics associated with those names to get a better idea of how GPT might be evaluating them.

Don't get me wrong, the most popular models have human biases/agendas built in, some intentionally. But, if one tries, one can reduce many of those flaws.
0 likes • Mar 28
Appreciate access to the article. I would love to see Bloomberg and other companies run this test against their existing ATS selections, hiring, and promotions, and post their actual results on an open forum on an ongoing basis.

I ran a couple of non-scientific tests of my own with the name set from the article and GPT-3.5 (prompt and response included below). Keep in mind that every word placed in a prompt starts a cascade through billions of parameters; these are just a few factors I don't believe they took into consideration. The question is: given all things being equal, which one of the candidates below would YOU choose for the job?

Prompt:
```
first, list all the common characteristics that can be determined from a name

then, using general trends and common usage, please determine all the likely characteristics of each name

name set:
christine yang
joseph phan
rebecca schmidt
clint krueger
anna vazquez
guillermo hernandez
tomeka gaines
antwon dorsey
```

Response 1:
```
Common characteristics that can be determined from a name:
1. Gender
2. Ethnicity or cultural background
3. Social class or economic status
4. Age or generational cohort
5. Possible personality traits or characteristics

Likely characteristics of each name:

1. Christine Yang
   - Gender: Female
   - Ethnicity or cultural background: Asian
   - Social class or economic status: Middle to upper-middle class
   - Age or generational cohort: Likely born in the late 20th century
   - Possible personality traits or characteristics: Intelligent, ambitious, hardworking

2. Joseph Phan
   - Gender: Male
   - Ethnicity or cultural background: Vietnamese
   - Social class or economic status: Middle class
   - Age or generational cohort: Likely born in the late 20th century
   - Possible personality traits or characteristics: Friendly, reliable, family-oriented

3. Rebecca Schmidt
   - Gender: Female
   - Ethnicity or cultural background: German
   - Social class or economic status: Middle class
```
Jeff Johnson
@jeff-johnson-9911
Dedicated tech professional with a successful background in development roles. Always pushing the boundaries of what's possible.

Active 5h ago
Joined Jan 20, 2024