
Memberships

Automation Incubator™ — Private • 16.8k members • Free

AI Automation Mastery — Private • 2.1k members • Free

AI Explorers — Public • 18 members • Free

8 contributions to AI Explorers
🚨 New tool 🚨 Project Neo release
Adobe just debuted their "Project Neo" beta, which lets us quickly create a multitude of design variations natively in the Adobe suite and convert those designs into 3D scenes. Think of it as a cracked Meshy AI for Adobe. Take a few minutes to watch this video to see the potential use cases you could implement!

Keep exploring,
Reece
🎥 Meta’s MovieGen: Raising the Bar for AI-Generated Video Content
Hey AI Explorers! 🌟 One of the most exciting AI innovations of this week is Meta's MovieGen. This powerful tool is part of the growing trend of multimodal AI, which means it can create synchronized audio and video from simple text prompts! Imagine generating a professional-quality video for your next marketing campaign or creative project without spending hours on editing; it's all done with AI.

Here's the landing page with example videos from Meta: https://ai.meta.com/research/movie-gen/

I don't believe MovieGen is available to the public yet, but of course you can use other tools to make realistic AI videos. It's interesting because, in addition to OpenAI's Sora, Creatify, and the other companies that have put their time and attention into AI video, Meta has spent large amounts of resources working to gain market share as well. The two main things I've seen set it apart are the video length and the integration of AI video into social media. Meta is working to make their videos long-form (at least a minute). Here's a quote from the whitepaper discussing the competition, whose audio generation is capped at 15 seconds:

"There are a few products offering video-to-audio capabilities, including PikaLabs and ElevenLabs, but neither can really generate motion-aligned sound effects or cinematic soundtracks with both music and sound effects. PikaLabs supports sound effect generation with video and optionally text prompts; however, it will generate audio longer than the video, where a user needs to select an audio segment to use. This implies under the hood it may be an audio generation model conditioned on a fixed number of key image frames. The maximum audio length is capped at 15 seconds without joint music generation and audio extension capabilities, preventing its application to soundtrack creation for long-form videos.

ElevenLabs leverages GPT-4o to create a sound prompt given four image frames extracted from the video (one second apart), and then generates audio using a TTA model with that prompt. Lastly, Google released a research blog describing their video-to-audio generation models that also provide text control. Based on the video samples, the model is capable of sound effects, speech, and music generation. However, the details (model size, training data characterization) about the model and the number of samples (13 samples with 11 distinct videos) are very limited, and no API is provided. It is difficult to conclude further details other than the model is diffusion-based and that the maximum audio length may be limited, as the longest sample showcased is less than 15 seconds."
1 like • Oct 14
Do you think this will be more powerful than Sora?
AI 2025... What's Next?
Really liked this analysis of OpenAI's DevDay release yesterday. A very exciting future is coming soon!
1 like • Oct 14
I actually just used the new voice model to translate a conversation in real time with someone who only spoke Creole. A super effective and useful use case ⤴️
What about AI fascinates you the most?
AI is very broad; is there a specific aspect of it that you're most interested in?
Poll
5 members have voted
What’s Next?🤔
What new ideas have you had, seen, or heard regarding technology/AI that aren't mainstream yet?
1 like • Oct 13
I just discovered that the first ever communication between two people while dreaming has occurred. I've been thinking about how things like Neuralink and EEG signal monitoring will play into dream capture and communication, memory, and eventually consciousness. Two shows I would watch to really understand what something like this could look like are: 1. Altered Carbon, 2. Upload. Dream communication article: https://report.az/en/amp/education-and-science/scientists-claim-two-people-communicated-in-their-dreams-in-world-first/
Reece Gardner
Level 2 • 11 points to level up
@reece-gardner-6079
AI Enthusiast - Interdisciplinary creative - Change maker

Active 17d ago
Joined Oct 1, 2024