AutoSocials is a local-first automation scaffold for social-content workflows. The current codebase centers on three practical entrypoints:
- a menu-driven CLI launcher in `src/main.py`
- provider/account helpers for YouTube, Twitter, LinkedIn, and Facebook in `src/classes/`
- standalone utilities in `Scripts/` for preflight checks, ComfyUI image generation, and provider scaffolding
The repository targets Python 3.12.
At a high level, the app currently:

- validates local readiness with `Scripts/preflight_checks.py`
- opens a simple CLI in `src/main.py`
- lets you create, list, select, and delete cached provider accounts for YouTube, Twitter, LinkedIn, and Facebook
- stores and reads configuration from `config.json`
- generates images through a ComfyUI API workflow using `Scripts/comfy_generate.py`
- loads editable LM prompt templates from `prompts/` via `src/prompt_loader.py`
Create a virtual environment, activate it, and install the dependencies:

```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip wheel
pip install -r requirements.txt
```

Then copy the example configuration and edit it for your machine:

```powershell
Copy-Item config.example.json config.json
```

The current `requirements.txt` includes:

```text
wheel
setuptools<81
termcolor
schedule
requests
openai
ollama
faster-whisper
prettytable
websocket-client
pillow
chatterbox-tts
resemble-perth
peft
```
Start from `config.example.json` and keep `config.json` valid JSON.
The current runtime expects a nested layout:
```json
{
  "verbose": true,
  "firefox_profile": "",
  "imagemagick_path": "Path to magick.exe or on linux/macOS just /usr/bin/convert",
  "llm_details": {
    "llm_provider": "lmstudio",
    "llm_base_url": "http://127.0.0.1:11434",
    "llm_api_key": "",
    "default_model": "qwen/qwen3.5-9b"
  },
  "llm_image_details": {
    "llm_image_provider": "comfyui",
    "image_api_base_url": "http://127.0.0.1:8188",
    "image_api_key": "",
    "image_model": "Illustrious\\illustriousRealism_ilXL10V30.safetensors",
    "image_aspect_ratio": "9:16"
  },
  "stt_details": {
    "stt_provider": "local_whisper",
    "whisper_model": "base",
    "whisper_device": "auto"
  },
  "tts_details": {
    "tts_provider": "chatterbox",
    "tts_device": "auto",
    "tts_voice_file": "Assets\\TTS_Voice.wav"
  },
  "youtube_details": {
    "script_sentence_length": "4"
  }
}
```

- `verbose`: enables extra logging in the launcher and helpers
- `firefox_profile`: local Firefox profile path used by browser automation flows
- `imagemagick_path`: optional path to `magick.exe` or `convert` for subtitle/image tooling
`llm_details`: used for text-model provider selection and model defaults.

- `llm_provider`: `ollama`, `lmstudio`, or `openrouter`
- `llm_base_url`: local or remote base URL for the selected provider
- `llm_api_key`: API key used for OpenAI-compatible providers; required for OpenRouter (or set `LLM_API_KEY` in your environment)
- `default_model`: fallback model name used by the app
`llm_image_details`: used by the ComfyUI image-generation tester.

- `llm_image_provider`: must be `comfyui`
- `image_api_base_url`: ComfyUI HTTP API base URL, for example `http://127.0.0.1:8188`
- `image_api_key`: optional bearer token for protected endpoints
- `image_model`: checkpoint name to inject into the workflow
- `image_aspect_ratio`: ratio hint such as `1:1`, `16:9`, or `9:16`
`stt_details`: used for speech-to-text configuration.

- `stt_provider`: currently expected to be `local_whisper`
- `whisper_model`: Whisper model size such as `base`, `small`, or `medium`
- `whisper_device`: runtime target such as `auto`, `cpu`, or `cuda`
`tts_details`: used for text-to-speech configuration.

- `tts_provider`: currently `chatterbox`
- `tts_device`: runtime target such as `auto`, `cpu`, or `cuda`
- `tts_voice_file`: path to the reference `.wav` voice file
`youtube_details`: YouTube-specific generation settings.

- `script_sentence_length`: target sentence count used by the script prompt generator
Some older helpers in `src/config.py` still read top-level keys directly, while the newer scripts and preflight checks use the nested config blocks above. In particular, `src/config.py` still expects values like `firefox_profile`, `llm_provider`, `default_model`, `llm_endpoint`, `ollama_base_url`, and `openrouter_api_key` in their legacy locations.
If you are actively using both the launcher and the newer scripts, keep that drift in mind until the legacy readers are fully unified.
Run the repository preflight script before launching the app:
```powershell
python Scripts\preflight_checks.py
```

It currently checks:

- `config.json` exists and is valid JSON
- `firefox_profile` points to an existing directory
- the selected LLM provider is reachable or has the expected credentials
- Ollama is reachable when the fallback path is used
- `faster-whisper` is importable when `stt_provider=local_whisper`
- optional local paths such as `imagemagick_path`
- ComfyUI reachability and model availability when `llm_image_provider=comfyui`
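The first two checks can be sketched in a few lines; this is illustrative only, and the real `Scripts/preflight_checks.py` covers more checks and reports differently.

```python
# Sketch of two preflight checks: config.json parses as JSON, and
# firefox_profile (if set) points at an existing directory.
import json
from pathlib import Path

def basic_config_checks(path: str = "config.json") -> list[str]:
    problems = []
    config_file = Path(path)
    if not config_file.is_file():
        return [f"{path} is missing"]
    try:
        config = json.loads(config_file.read_text(encoding="utf-8"))
    except json.JSONDecodeError as exc:
        return [f"{path} is not valid JSON: {exc}"]
    profile = config.get("firefox_profile", "")
    if profile and not Path(profile).is_dir():
        problems.append("firefox_profile does not point to an existing directory")
    return problems
```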
`src/main.py` runs the primary CLI flow. When launched, it:
- prints the ASCII banner
- runs `Scripts/preflight_checks.py`
- ensures the local `.as` folder exists
- cleans temporary files from `.as`
- opens the provider menu
Run it with:
```powershell
python src\main.py
```

From the main menu you can start one of the provider flows or quit the app.
The current provider menu includes:
- YouTube
- Quit
Each provider uses the shared account manager in `src/classes/account_menu.py` to:
- list cached accounts
- create a new account
- delete an existing account
- select an account and hand it to the provider-specific controller
When you create a new account, the shared flow generates a UUID, stores the Firefox profile path from `config.json`, and asks for the common account fields:
- nickname
- niche
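The resulting cached record can be pictured like this; the field names beyond `nickname` and `niche` are assumptions for illustration, not the exact on-disk schema:

```python
# Hypothetical sketch of the cached-account record created by the shared
# flow: a fresh UUID plus the Firefox profile path from config.json and
# the two prompted fields. Field names are illustrative assumptions.
import uuid

def new_account(nickname: str, niche: str, firefox_profile: str) -> dict:
    return {
        "id": str(uuid.uuid4()),
        "nickname": nickname,
        "niche": niche,
        "firefox_profile": firefox_profile,
    }

account = new_account("main-channel", "retro gaming", "C:\\profiles\\yt")
print(account["id"])  # random UUID string
```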
After selection, each provider opens a small provider-specific menu with:
- test service
- generate video
- upload video
- show account details
- back
At the moment, `test_connection()` confirms service wiring for all providers. YouTube has a partial generation pipeline, while Twitter, LinkedIn, and Facebook still keep `generate_video()` and `upload_video()` as placeholders.
Prompt text for LM-driven tasks lives in `prompts/` so you can tune wording without editing Python source.

- shared templates: `prompts/common/`
- provider-specific templates: `prompts/providers/<provider>/`
The loader in `src/prompt_loader.py` supports:

- `load_prompt(prompt_name, provider=...)` to read template files
- `render_prompt(template, context={...})` to inject placeholders like `{niche}`
- `load_and_render_prompt(...)` as a convenience wrapper
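The render step can be pictured with a tiny stand-in; the function name matches the loader API above, but the body is an illustration, not the repo's actual implementation:

```python
# Illustrative stand-in for render_prompt: placeholders such as {niche}
# are treated as plain str.format fields. The real loader in
# src/prompt_loader.py may handle missing keys differently.
def render_prompt(template: str, context: dict) -> str:
    return template.format(**context)

template = "Suggest a short video topic for the {niche} niche."
print(render_prompt(template, {"niche": "retro gaming"}))
# → Suggest a short video topic for the retro gaming niche.
```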
Current prompt files in this repo include:

- `prompts/common/generic_prompt.txt`
- YouTube templates under `prompts/providers/youtube/` (`generate_topic.txt`, `generate_script.txt`, `generate_title.txt`, `generate_description.txt`, `generate_prompts.txt`, `generate_video.txt`)
Note: non-YouTube providers currently call `generate_video` via the prompt loader, but only YouTube prompt templates are included by default.
The repository includes a standalone ComfyUI test harness in `Scripts/comfy_generate.py`:

```powershell
python Scripts\comfy_generate.py --prompt "a cinematic ruined castle at sunset"
```

This script is intended for local ComfyUI testing outside the main app. It loads `config.json`, reads the `llm_image_details` block, edits an API-format workflow, submits it to ComfyUI, waits for completion, downloads the result, and saves the generated image locally.
By default the script uses:

- workflow file: `Assets/workflow_api.json`
- output base path: `output/generated.png`
All paths are resolved relative to the project root, so you can run the script from different working directories.
- `--workflow`: alternate API workflow JSON file
- `--prompt`: required prompt text to inject into the workflow
- `--output`: base output file path used when saving the rendered image
- `--base-pixels`: target long side used to derive width and height from the aspect ratio
- `--no-show`: skip opening the image after saving
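How `--base-pixels` interacts with `image_aspect_ratio` can be sketched as follows; the snapping of both sides to multiples of 8 is an assumption about typical latent sizing, not necessarily what the script does internally:

```python
# Sketch of deriving width/height from image_aspect_ratio and
# --base-pixels: the long side gets base_pixels, the short side is
# scaled by the ratio, and both snap to multiples of 8 (an assumption).
def derive_dimensions(aspect_ratio: str, base_pixels: int = 1024) -> tuple[int, int]:
    w_ratio, h_ratio = (int(part) for part in aspect_ratio.split(":"))
    if w_ratio >= h_ratio:
        width = base_pixels
        height = round(base_pixels * h_ratio / w_ratio / 8) * 8
    else:
        height = base_pixels
        width = round(base_pixels * w_ratio / h_ratio / 8) * 8
    return width, height

print(derive_dimensions("9:16", 1024))   # portrait
print(derive_dimensions("16:9", 1024))   # landscape
```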
Examples:

```powershell
python Scripts\comfy_generate.py --prompt "a grim post-apocalyptic tower block at sunset"
python Scripts\comfy_generate.py --prompt "a cinematic forest shrine at dawn" --no-show
python Scripts\comfy_generate.py --prompt "an abandoned shopping centre in the rain" --base-pixels 1536
python Scripts\comfy_generate.py --workflow Assets\workflow_api.json --prompt "a futuristic skyline at blue hour"
```

`Scripts/comfy_generate.py` currently auto-updates these common ComfyUI node types:
- `CheckpointLoaderSimple`: replaces the checkpoint name with `image_model`
- `CLIPTextEncode`: replaces the first positive prompt node with your prompt
- `EmptyLatentImage`: recalculates width and height from `image_aspect_ratio`
- `KSampler`: randomises the seed
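In the spirit of that list, patching an API-format workflow dict might look like this; the input field names (`ckpt_name`, `text`, `width`, `height`, `seed`) follow the standard ComfyUI node definitions, but treating the first `CLIPTextEncode` encountered as the positive prompt is a simplification:

```python
# Illustrative patching of a ComfyUI API-format workflow by class_type.
# This is a sketch, not the actual logic in Scripts/comfy_generate.py.
import random

def patch_workflow(workflow: dict, checkpoint: str, prompt: str,
                   width: int, height: int) -> dict:
    prompt_patched = False
    for node in workflow.values():
        kind = node.get("class_type")
        inputs = node.setdefault("inputs", {})
        if kind == "CheckpointLoaderSimple":
            inputs["ckpt_name"] = checkpoint
        elif kind == "CLIPTextEncode" and not prompt_patched:
            inputs["text"] = prompt  # first node treated as the positive prompt
            prompt_patched = True
        elif kind == "EmptyLatentImage":
            inputs["width"], inputs["height"] = width, height
        elif kind == "KSampler":
            inputs["seed"] = random.randint(0, 2**32 - 1)
    return workflow

workflow = {
    "1": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": "old.safetensors"}},
    "2": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
    "3": {"class_type": "EmptyLatentImage", "inputs": {"width": 512, "height": 512}},
    "4": {"class_type": "KSampler", "inputs": {"seed": 0}},
}
patched = patch_workflow(workflow, "model.safetensors", "a ruined castle", 576, 1024)
```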
It then:

- sends the workflow to ComfyUI with `POST /prompt`
- waits for completion over WebSocket
- fetches run history from `GET /history/{prompt_id}`
- downloads the first generated image via `GET /view`
- saves the output with a prompt-based timestamped filename
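A bare-bones version of that round-trip, assuming the standard ComfyUI endpoints (`/prompt`, `/history/{id}`, `/view`) and omitting the WebSocket wait and all error handling:

```python
# Minimal sketch of submitting a workflow and building the follow-up
# URLs using only the standard library. The real script also waits for
# completion over WebSocket before fetching history.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://127.0.0.1:8188"

def submit_workflow(workflow: dict) -> str:
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(
        f"{BASE_URL}/prompt", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["prompt_id"]  # id to poll in /history

def history_url(prompt_id: str) -> str:
    return f"{BASE_URL}/history/{prompt_id}"

def view_url(filename: str, subfolder: str = "", folder_type: str = "output") -> str:
    query = urllib.parse.urlencode(
        {"filename": filename, "subfolder": subfolder, "type": folder_type})
    return f"{BASE_URL}/view?{query}"
```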
The saved file name looks like this:
```text
output/a_grim_post_apocalyptic_tower_block_at_sunset_20260412_143522.png
```
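That naming scheme can be approximated like this; the exact sanitisation rules in the script may differ:

```python
# Illustrative version of the prompt-based timestamped output name: the
# prompt is lowercased, runs of non-alphanumerics collapse to single
# underscores, and a YYYYMMDD_HHMMSS stamp is appended.
import re
from datetime import datetime

def output_filename(prompt: str, out_dir: str = "output") -> str:
    stem = re.sub(r"[^a-z0-9]+", "_", prompt.lower()).strip("_")
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{out_dir}/{stem}_{stamp}.png"

print(output_filename("a grim post-apocalyptic tower block at sunset"))
```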
The bundled workflow at `Assets/workflow_api.json` is a ComfyUI API export that matches the script’s assumptions. It currently expects the following key node types:

- `CheckpointLoaderSimple`
- `EmptyLatentImage`
- positive and negative `CLIPTextEncode` nodes
- `KSampler`
- `VAEDecode`
- `SaveImage`
If your workflow has a different shape, the script may still work, but only if those key nodes and input fields are present.
To use the test script, you need:

- a running ComfyUI instance
- API access enabled on that instance
- a valid API-format workflow JSON
- the checkpoint referenced by `image_model` available in ComfyUI
The default local endpoint is expected to be `http://127.0.0.1:8188`.
`Scripts/scaffold_provider.py` can generate a starter provider package under `src/classes/providers/<provider_slug>`.
Example:

```powershell
python Scripts\scaffold_provider.py tiktok
python Scripts\scaffold_provider.py bluesky --class-prefix BlueSky --display-name Bluesky --service-name "Bluesky Automator"
```

It creates:

- `__init__.py`
- `controller.py`
- `service.py`
The generated service class is intentionally a stub so you can wire in provider-specific browser, content, and upload logic.
- `src/` - application code and entrypoints
- `src/classes/` - provider-specific controllers and the shared account flow
- `src/classes/providers/` - provider implementations for YouTube, Twitter, LinkedIn, and Facebook
- `Scripts/` - local validation utilities such as `preflight_checks.py`, `comfy_generate.py`, and `scaffold_provider.py`
- `Assets/` - static resources, including the default banner and ComfyUI workflow
- `prompts/` - editable prompt templates used by LM-related flows
- `output/` - generated files and other runtime artifacts
- the launcher is menu-driven, but the overall automation flows are still scaffold-level
- only YouTube has a partial generation pipeline; Twitter, LinkedIn, and Facebook services remain placeholder stubs for generation/upload
- the config layer still has some legacy drift between nested and top-level key lookups
- `Scripts/comfy_generate.py` assumes a compatible ComfyUI workflow structure with the node types listed above
- there is no documented end-to-end social automation workflow yet
See LICENSE for project licensing details.