[API Request] AudioMuse-AI integration #172
Replies: 12 comments 59 replies
-
I also attached the Swagger file, AUDIOMUSE_API.JSON, which you can open in the Swagger online editor. There are several endpoints that are not needed; just focus on the ones described above.
-
LMS already has a built-in recommendation engine. Previously, it relied on low-level data provided by AcousticBrainz (a collaborative project based on extractors from the Essentia library, which was unfortunately discontinued a few years ago). Using this data, a SOM (Self-Organizing Map) was built to group songs by acoustic similarity. The recommendation engine then used this map to find the closest songs starting from a given one (it was also possible to find a more or less short path between two songs).

All of this is to say that it’s a topic I’d like to revisit for LMS, namely managing the extraction of low-level features from songs in order to rebuild that SOM and reconnect the recommendation engine to it. The API offered by this AudioMuse solution seems to propose something quite similar. For OpenSubsonic, we’d just need to retain the high-level concepts and let each server handle its own implementation. And here, it’s indeed the server itself that should provide the data (even if internally it relies on a third-party service like AudioMuse-AI).
-
AudioMuse-AI works with Librosa and TensorFlow. It not only extracts tags, energy, and tempo, but also a 200-dimensional vector for each song based on the Musicnn embedding. It is from this vector that the most useful functionality works. It started out using Essentia-Tensorflow but then switched to Librosa and TensorFlow because they also work on ARM and are more actively updated. Anyway, I agree: if we agree on the API signature, then each server could implement it however they want. Of course, I would like it to be with AudioMuse-AI.
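To illustrate how recommendations over such embeddings typically work (a hedged sketch, not AudioMuse-AI's actual code — the function names and the cosine-similarity metric are my assumptions), ranking a library's per-song vectors against a seed song looks roughly like:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(seed, library, n=10):
    # library: mapping of song_id -> embedding vector (e.g. a 200-dim
    # Musicnn embedding); returns the n ids closest to the seed vector.
    ranked = sorted(library.items(),
                    key=lambda kv: cosine_similarity(seed, kv[1]),
                    reverse=True)
    return [song_id for song_id, _ in ranked[:n]]
```

A real implementation would use an approximate-nearest-neighbor index rather than a full sort, but the high-level contract (seed song in, ranked ids out) is the same one the API below exposes.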
-
Open Subsonic ;) IMO the necessary public endpoints are more or less the same as Plex.
-
So are we going to add at least this new API for sonic similarity and song paths to the OpenSubsonic API? Do you need any extra information from me? I would really like music servers like LMS to implement it by talking to AudioMuse-AI.
-
Revisiting this request to ask the devs whether this integration is planned for the near future. I really want to get back to using Navidrome with OpenSubsonic, but this killer feature is still pending. ;)
-
Some time has passed again, and I'm eagerly awaiting a way to connect Navidrome with AudioMuse-AI. In my opinion, this is exactly the right step to move this forward. Are there any updates on the development status?
-
Hi all, thanks to this and to the Navidrome plugin system, I was able to implement the AudioMuse-AI Navidrome plugin here: I'm also looking forward to @epoupon's implementation in LMS!
-
OK, attempting to build consensus here before writing up an extension PR. How about adding an OS extension for
-
@opensubsonic/servers As Tolriq points out, if all current member servers support track and album Ids as arguments to Status for
| Server | Supports artistID (required) | Supports songID | Supports albumID | Notes |
|---|---|---|---|---|
| Navidrome | ✅ | ✅ | ✅ | Since the original release |
| LMS | ✅ | ✅ | ✅ | ref |
| Gonic | ✅ | ❌ | ❌ | ref |
| Ampache | | | | |
| Astiga | ✅ | ✅ | ✅ | |
| Nextcloud Music | ✅ | ✅ | ✅ | |
| Supysonic | ❌ ? | ❌ | ❌ | doesn't support the endpoint at all? |
| QM-Music | | | | |
-
I don’t understand whether an agreement was finally reached and, if yes, what I need to do to be supported by the different music servers and the clients on top. Multiple people have asked me for Symfonium support on top of Navidrome, and my understanding is that he is waiting for an agreement here, and probably the same action is needed on my side.
-
PR opened for
-
Type of change
API extension
Proposal description
AudioMuse-AI is a back-end with a minimal front-end that introduces open-source, free sonic analysis.
It is a containerized application that, talking to Jellyfin, Navidrome, and other Subsonic-like media server APIs, does:
If the first three are more "server-side functionality", the last two would be very helpful if exposed as APIs from the media server to the final client, because they are very interactive for the end user.
Backward compatibility impact
I think these are totally new endpoints. So far I have never found sonic analysis implemented on a Subsonic media server.
API details
Below are some specific examples of API calls and responses. By installing AudioMuse-AI you can also use the interactive Swagger at:
Additional examples can also be tried by installing AudioMuse-AI and using the integrated front-end, or by looking at how [audiomuse-ai-plugin](https://github.com/NeptuneHub/audiomuse-ai-plugin) remaps them.
1 - /api/analysis/start [POST]
It starts the analysis task. It can be run without any mandatory parameters to use the defaults. The returned task_id is useful for cancelling the task.
Response:
2 - /api/clustering/start [POST]
It starts the clustering task. It can be run without any mandatory parameters to use the defaults. The returned task_id is useful for cancelling the task.
Response:
2.5 - active_tasks/cancel API for analysis and clustering
/api/active_tasks [GET]
Gives you a status update and a log for the active tasks. It is needed for analysis and clustering because they are asynchronous.
Response
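Because analysis and clustering are asynchronous, a client is expected to start a task and then poll /api/active_tasks until it finishes. A minimal sketch of that flow (the endpoint paths are the ones listed above; the JSON field names and the injected `http` transport are assumptions for illustration, not the documented response shape):

```python
def run_task(http, kind="analysis"):
    """Start an async AudioMuse-AI task and poll until it is no longer active.

    http(method, path) must return a parsed JSON dict; in a real client it
    would wrap an HTTP library pointed at the AudioMuse-AI server. The
    "task_id"/"tasks" field names below are assumed, not confirmed.
    """
    started = http("POST", f"/api/{kind}/start")
    task_id = started["task_id"]  # the returned id, also usable to cancel
    while True:
        active = http("GET", "/api/active_tasks")
        active_ids = [t["task_id"] for t in active.get("tasks", [])]
        if task_id not in active_ids:
            return task_id  # task finished (or was cancelled)
```

A production client would add a sleep between polls and a timeout; this only shows the start-then-poll contract.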
3 - /api/sonic_fingerprint [GET]
It creates a list of songs sonically similar to the most frequently played music. It currently needs to know the user and the password/API token of that user.
/api/sonic_fingerprint/generate?n=200&navidrome_user=user&navidrome_password=password
Response:
4 - /api/similar_tracks [GET]
It creates a list of songs similar to a given song id. You can pass n as the number of similar songs.
/api/similar_tracks?item_id=edbc325978d35662ec7420e78fad15cd&n=10&eliminate_duplicates=true
Response:
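As a sketch of how a client would assemble this call (the parameter names come from the example URL above; the base URL and helper name are mine):

```python
from urllib.parse import urlencode

def similar_tracks_url(base, item_id, n=10, eliminate_duplicates=True):
    # Builds the GET URL for /api/similar_tracks with the parameters
    # shown in the example above (item_id, n, eliminate_duplicates).
    query = urlencode({
        "item_id": item_id,
        "n": n,
        "eliminate_duplicates": str(eliminate_duplicates).lower(),
    })
    return f"{base}/api/similar_tracks?{query}"
```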
5 - /api/find_path [GET]
It returns a path between two song ids. You can set max_steps or omit it; the default is 25.
http://audiomuse.192.168.3.131.nip.io:8087/api/find_path?start_song_id=f260106767eb26f4cd411f883e9fccd2&end_song_id=a40d380625abb801e33884ee5ebc66da&max_steps=10
Response:
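The optional max_steps parameter can simply be left out to get the server-side default of 25, as in this sketch (parameter names from the example URL above; the helper itself is illustrative):

```python
from urllib.parse import urlencode

def find_path_url(base, start_song_id, end_song_id, max_steps=None):
    # max_steps is optional; when omitted the server defaults to 25.
    params = {"start_song_id": start_song_id, "end_song_id": end_song_id}
    if max_steps is not None:
        params["max_steps"] = max_steps
    return f"{base}/api/find_path?{urlencode(params)}"
```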
Security impacts
The only security impact I see is the Sonic Fingerprint endpoint, which passes the user and password in the query string. If this functionality is seen as a security issue it can be skipped, or alternatively you can suggest a better implementation. But I would like to avoid this single functionality becoming a blocker for the entire API proposal.
Potential issues
I don't see any potential issues, but if you do, I'm open to working on solving them. Thanks.
Alternative solutions
Currently AudioMuse-AI has its own minimal front-end, but this is more for testing or initial adoption. For a better user experience it would be better if these APIs were exposed by the Subsonic API server and then implemented directly in the front-end that plays the music.