In itself, a Daemon does not require GPU processing power. It is not an AI, but a gateway between OpenSourceAIs.com and the various AIs running on the same machine where the Daemon is installed.
A Daemon is personal to an authenticated OSAIS user. It acts as a CPU/GPU service provider for AIs: any AI launched by a Daemon pays that same Daemon for its use of CPU/GPU.
1/ Setting-up a Daemon
1.1/ Create your Daemon in OSAIS
Before you can run a Daemon on a machine, you need to create it in your profile on the OpenSourceAIs dashboard. Go to your [Profile], and then select the tab [AI Settings].
A button [Create a Daemon] will appear. Clicking it instantly creates a Daemon and associates it with your user profile.
You may need to regenerate the secret and copy it to your clipboard.
Keep the Daemon Token and Secret at hand: you will need them when running the Daemon on your machine.
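To keep the token and secret out of your shell history for the later run step, you can store them in a small environment file first. The file name and variable names below are our own convention, not something OSAIS mandates:

```shell
# Store the Daemon credentials in a private file (names are our own
# convention, not an OSAIS requirement); paste the real values from
# your [AI Settings] page.
cat > ~/.osais_daemon.env <<'EOF'
DAEMON_TOKEN=paste-your-token-here
DAEMON_SECRET=paste-your-secret-here
EOF
chmod 600 ~/.osais_daemon.env
```

The `chmod 600` restricts the file to your user, which matters because the secret grants access to your Daemon's earnings.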
1.2/ Download the OSAIS Daemon on your machine
We assume that you have properly configured WSL and Docker, as per the earlier step.
For each of the commands below, make sure you are in the WSL subsystem, running Ubuntu.
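A quick sanity check before proceeding (this assumes the WSL and Docker setup from the earlier step was completed):

```shell
# Confirm we are on Ubuntu and that Docker is available.
grep PRETTY_NAME /etc/os-release       # should mention Ubuntu
docker --version || echo "Docker not found - revisit the setup step"
```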
An OSAIS Daemon is a Docker image. It can be pulled from the Docker repository yeepeekoo/public:ai_daemon:
docker pull yeepeekoo/public:ai_daemon
The Daemon lets you process AI requests for free for yourself, and earns you credits for processing requests coming from others. It is therefore essential to set it up correctly in order to receive those credits.
1.3/ Run the Daemon
We chose to make the Daemon run on port 3333. In general, you will need a good internet connection and decent GPU processing power, or you may be unable to run certain AIs on the machine.
To run your Daemon, open a WSL session in a terminal and launch the ai_daemon container, inserting your token and secret in the command line.
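OSAIS provides the exact launch command; below is only a minimal sketch of what it typically looks like. The environment variable names (DAEMON_TOKEN, DAEMON_SECRET) are assumptions on our part, and the docker.sock mount reflects the fact that the Daemon starts and stops other AI containers:

```shell
# Hypothetical invocation sketch - the variable names are assumptions;
# substitute the token and secret from your OSAIS profile.
DAEMON_TOKEN="paste-your-token-here"
DAEMON_SECRET="paste-your-secret-here"

docker run -d \
  --name ai_daemon \
  -p 3333:3333 \
  -e DAEMON_TOKEN="$DAEMON_TOKEN" \
  -e DAEMON_SECRET="$DAEMON_SECRET" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  yeepeekoo/public:ai_daemon
```

The `-d` flag runs the container in the background, so the terminal stays free for the log checks below.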
You can check that your Daemon is running OK by listing the running containers and by inspecting its logs.
// check that the Daemon is running
docker ps
// to check the logs (around 60 seconds after the Docker Run command)
docker logs ai_daemon
Here is a typical log for a running Daemon, after 1 minute of execution.
=> UNIX OS detected
--- starting Daemon ---
...requesting a Cloudflared tunnel...
=> Daemon running on: https://ai-shirt-noon-address.trycloudflare.com/
=> Daemon is authenticated into OSAIS (https://opensourceais.com/)
Cleaning all AI containers...
=> Container ai_ping was removed
=> Container ai_ping was started with ENV={'TUNNEL_DAEMON': 'https://ai-shirt-noon-address.trycloudflare.com/', 'DAEMON_COMM_UID': '2a78981f2cf8768b3f8afd300a4e9278', 'CONTAINER_NAME': 'ai_ping', 'SHUTDOWN_TIMER': 15, 'TUNNEL_OSAIS': None}
* Serving Flask app 'app_package.__main__'
* Debug mode: off
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:3333
* Running on http://172.17.0.2:3333
Press CTRL+C to quit
127.0.0.1 - - [12/Sep/2024 12:02:02] "POST /notify/ai/status HTTP/1.1" 200 -
127.0.0.1 - - [12/Sep/2024 12:02:03] "POST /notify/ai/status HTTP/1.1" 200 -
127.0.0.1 - - [12/Sep/2024 12:02:05] "POST /notify/ai/status HTTP/1.1" 200 -
1.4/ Access the Daemon
The first lines of the log above contain the address of the Daemon (in our example, https://ai-shirt-noon-address.trycloudflare.com/). You can enter this URL in a browser to directly access your Daemon.
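Since the tunnel address changes on every start, you can also extract it from the logs directly instead of scrolling; the pattern below matches the trycloudflare URLs shown in the sample log:

```shell
# Grab the current Daemon tunnel URL from the container logs.
docker logs ai_daemon 2>&1 \
  | grep -o 'https://[a-z0-9-]*\.trycloudflare\.com' \
  | head -n 1
```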
You can also access your Daemon in private mode (as an authenticated user) by refreshing the [AI Settings] page of your [Profile] in OSAIS. If the Daemon is running, you will see a green button with the name of the machine. Click this button to access the Daemon.
The Daemon itself has a basic user interface, which shows all AIs currently running on the machine, as well as those not yet running but which can be woken up. The UI also shows basic stats for the running AIs, including warmup time, transaction execution time, and transaction cost. Note that all AIs can be started or stopped via this UI.
AIs are also automatically shut down if they are not used for 15 minutes.
2/ Testing a first AI
The Daemon alone does not do much, apart from securing a CPU/GPU infrastructure for AIs.
2.1/ AI PING
The first time a Daemon runs on a machine, it downloads AI PING to validate the machine and check minimum machine capabilities.
AI PING does not need a GPU to work (unlike most AIs, which require GPU power). It is a good test AI for validating that the configuration is set up correctly, and it is always downloaded onto the machine as part of the strict minimum required to run an AI on the OSAIS infrastructure.
You can check that AI PING is running properly by inspecting its logs:
docker logs ai_ping
When running properly, the logs will show something like this:
WELCOME TO OSAIS...
=> System Info
> Current version of Python is 3.8.10 (default, Nov 22 2023, 10:22:35)
[GCC 9.4.0]
> Platform version: 3.8.10
> Core AI AGENT version: v24.09.07
> argparse v1.1
> GPU uid: GPU-b8f7e6d6-8233-817a-f503-54f44e524a61
> Engine name updated to 'ping'
> Daemon url updated to 'https://camera-tender-maryland-biological.trycloudflare.com/'
> Static dir updated to './_static/'
> Template dir updated to './_templates/'
> Set S3 bucket to 'osais'
=> Logged into AWS S3
> Loaded config from /src/app/ai/config/ping.json
... requesting a Tunnel ...
=> AI running on: https://coffee-shops-tournament-admission.trycloudflare.com/
> PROD Origin OSAIS set to https://opensourceais.com/
> AI Agent ID for PROD set to 6efb5273ee6c14488bb7b809fabb4832e84fc4f6a628f23acaab7d2f8a8ee284
=> Daemon was notified of our status (initializing)
...Attempting to Login into OSAIS...
=> AI Agent is authenticated into https://opensourceais.com/
> Loaded config from /src/app/ai/config/osais.json
> We patched config of AI Agent ping into OSAIS at https://opensourceais.com/
> Client ID updated to None for PROD
<===== Config =====>
=> engine: ping v2.0.3
=> in Docker: True
=> is Debug: False
=> OSAIS: https://opensourceais.com/
=> is Local: False
=> AI Agent:
> Daemon tunnel: https://camera-tender-maryland-biological.trycloudflare.com/
> AI tunnel: https://coffee-shops-tournament-admission.trycloudflare.com/
> is connected to Daemon: True
> is connected to OSAIS: True
> AI Agent Token: 6efb5273ee6c14488bb7b809fabb4832e84fc4f6a628f23acaab7d2f8a8ee284
> is connected to LOCAL: False
=> is connected as Client: True
> client ID: 211a4b8f959a32e8866e0b826a295006baf82ebe6661f007c5bccad9ba931a22
<===== /Config =====>
=> starting a watch on path: /src/app/_output/
=> Daemon was notified of our status (initialized)
Mapped static to: ./_static/
Mapped templates to: ./_templates/
Mapped input to: /src/app/_input/
Mapped output to: /src/app/_output/
will attempt a warm up request...
=> Processing request with UID 1726142548
=> Warming up...
=> origin set to None
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:5001 (Press CTRL+C to quit)
=> before run: processed args from url: ['-width', '512', '-height', '512', '-o', 'warmup.jpg', '-filename', 'warmup.jpg', '-idir', '/src/app/_input/', '-odir', '/src/app/_output/']
=> starting a watch on path: /src/app/_output/
Namespace(hImage=512, indir='/src/app/_input/', init_image='warmup.jpg', outdir='/src/app/_output/', output='warmup.jpg', prompts=None, wImage=512, watermark=None)
> Warmup image detected, not uploading to S3, not notifying OSAIS
=> AI ready!
=> Able to process requests in 0.01 seconds
=> AI warmup took 8.14 seconds
=> AI own CPU/GPU cost calculation = 0.01 secs
=> Final cost calculation = 0.01 sec, total cost = $1.241830065359477e-08
=> AI processed 1 files
=> Daemon was notified of our status (idle)
AI PING has notified both OSAIS and the Daemon of its presence, as you can see in the logs.
2.2/ Testing the AI PING
Go to the AI Marketplace and select AI PING.
Click [Use this AI!] to run a test. This brings up a UI to upload an image. Upload an image of your choice, then click [Generate!].
The PING AI does not do much other than sending the same image back to you, but it is a good test AI to check that everything works as it should, without having to configure more complex settings or credit your account with $$$.
When you click [Generate!], OpenSourceAIs will use your own Daemon to find a machine (you may have set up your Daemon on several machines; this is OK) to generate the image. You can check this in the logs on the dashboard, and it will also appear in the logs of the AI and of the Daemon itself.
You are now set up to use the CPU computing power of your machine for free. There is one more step to go through to use the GPU.
2.3/ Pre-loading Docker images
Although you may let the Daemon download Docker images on demand, some images are quite large (20 GB or more), and waiting for a full download would be annoying for an end user.
It is therefore recommended to download an image manually before the Daemon uses it for the first time; otherwise you may have to wait several minutes before the Daemon gets a readiness signal from the AI.
2.4/ Adding a GPU-based AI
Most AIs, at least the most useful ones, require a GPU to process input.
As an example, we will install GFPGAN, an image enhancement and restoration AI. We will proceed mostly as we did with AI PING, but this time we will configure GFPGAN.
Go to your WSL terminal and pull the Docker image:
docker pull yeepeekoo/public:ai_gfpgan
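Before running a GPU-based AI, it is worth checking that containers can actually see the GPU. The commands below assume an NVIDIA card and the NVIDIA Container Toolkit from the earlier setup step; the CUDA image tag is just one example:

```shell
# Check the GPU on the host, then from inside a container; either command
# failing means the GPU setup needs revisiting.
nvidia-smi || echo "host cannot see an NVIDIA GPU"
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi \
  || echo "containers cannot see the GPU"
```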
Alternatively, instead of pulling the image manually in the terminal, you can start the AI (here GFPGAN) from the Daemon UI (as shown above). If the Docker image is not yet on the machine, the first start will take much longer than the expected warmup time because it requires a full download; after that, subsequent starts will run within the expected warmup time.
As with PING, let's check that it is running properly. Running docker logs ai_gfpgan will show something like this:
WELCOME TO OSAIS...
=> System Info
> Current version of Python is 3.8.10 (default, Nov 22 2023, 10:22:35)
[GCC 9.4.0]
> Platform version: 3.8.10
> Core AI AGENT version: v24.09.07
> argparse v1.1
> GPU uid: GPU-b8f7e6d6-8233-817a-f503-54f44e524a61
> Engine name updated to 'gfpgan'
> Daemon url updated to 'https://camera-tender-maryland-biological.trycloudflare.com/'
> Static dir updated to './_static/'
> Template dir updated to './_templates/'
> Set S3 bucket to 'osais'
=> Logged into AWS S3
> Loaded config from /src/app/ai/config/gfpgan.json
... requesting a Tunnel ...
=> AI running on: https://im-violations-photographer-euro.trycloudflare.com/
> PROD Origin OSAIS set to https://opensourceais.com/
> AI Agent ID for PROD set to 450de3cf83ef871d390c225cbba480794976d726f14dc31f3f1833f42fdf1178
=> Daemon was notified of our status (initializing)
...Attempting to Login into OSAIS...
=> AI Agent is authenticated into https://opensourceais.com/
> Loaded config from /src/app/ai/config/osais.json
> We patched config of AI Agent gfpgan into OSAIS at https://opensourceais.com/
> Client ID updated to None for PROD
<===== Config =====>
=> engine: gfpgan v2.0.1
=> in Docker: True
=> is Debug: False
=> OSAIS: https://opensourceais.com/
=> is Local: False
=> AI Agent:
> Daemon tunnel: https://camera-tender-maryland-biological.trycloudflare.com/
> AI tunnel: https://im-violations-photographer-euro.trycloudflare.com/
> is connected to Daemon: True
> is connected to OSAIS: True
> AI Agent Token: 450de3cf83ef871d390c225cbba480794976d726f14dc31f3f1833f42fdf1178
> is connected to LOCAL: False
=> is connected as Client: True
> client ID: 211a4b8f959a32e8866e0b826a295006baf82ebe6661f007c5bccad9ba931a22
<===== /Config =====>
=> starting a watch on path: /src/app/_output/
=> Daemon was notified of our status (initialized)
Mapped static to: ./_static/
Mapped templates to: ./_templates/
Mapped input to: /src/app/_input/
Mapped output to: /src/app/_output/
will attempt a warm up request...
=> Processing request with UID 1726144666
=> Warming up...
The GFPGAN AI is ready to work. You can also check this on the Daemon UI page, where you will see another AI linked to your Daemon.
2.5/ Other AIs
To run other AIs, proceed in the same manner: pre-download the image on the machine, then run the AI either via the Daemon UI or the OSAIS UI.
3/ Other useful considerations
3.1/ Launching the Daemon
The Daemon clears all AI Docker instances at start-up, and then runs AI PING for testing and minimum-capability purposes. Any AI instance running on the machine before a Daemon restart will therefore be terminated.
3.2/ Shutting down AIs
The Daemon is configured to shut down any AI that has been idle for 15 minutes. This keeps machine resource usage at a minimum while the Daemon is running but no AI is currently required.
3.3/ Stopping the Daemon
The Daemon can be stopped from the machine by running the following commands:
// stop the Daemon
docker stop ai_daemon
// remove the Daemon container before running it again (always required, even after reboot)
docker rm ai_daemon
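If you put these two commands in a script, they can be made tolerant of a container that is already stopped or already removed:

```shell
# Stop and remove the Daemon container; `|| true` keeps a script going
# when the container is already stopped or absent.
docker stop ai_daemon 2>/dev/null || true
docker rm ai_daemon 2>/dev/null || true
```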
3.4/ Stopping an AI
It is not recommended to stop an AI by running docker stop <ai_name>. This would desynchronise the Daemon, OSAIS, and the AI for a few seconds or minutes. If you want to shut down an AI manually, go to the Daemon UI page and press the red [Stop AI] button. This closes the AI via the Daemon itself and notifies OSAIS immediately.