Frigate person recognition reddit.
Moore says some of the issues have since been patched but cannot verify that cloud data is being properly deleted.

You can get Double Take itself up and running in like 10 minutes. Looking for recommendations. With everything set up correctly, six camera streams of 1080p might see about 5-8% CPU usage. Mine is still doing its thing for over 10 years.

Facial recognition takes a ton of pixels, however. If I were able to set that confidence threshold to 75% it would save me a lot of wrong tags without needing another model. A sensor is being generated, recognizing my face.

Frigate is an NVR (network video recorder) that uses AI, specifically TensorFlow Lite models, to track objects (people, cars, dogs, cats, etc.) and alert you in a myriad of customizable ways when something "interesting" happens (a person comes up to your door, for example).

I have a lifesize statue of a cat on my back porch and, until I excluded an area around it, Frigate was constantly telling me it detected a cat even though the statue didn't move.

I believe the UI only says Pet/Person as well (only based off my UniFi doorbell), so if you want more granular AI recognition, Frigate is the way to go.

Here is an automation I am using: automation: - alias: Turn on the outside lights when a person is detected at night trigger: platform: numeric_state entity_id: sensor. …

What is really important for me is the object detection. I tried BlueIris a few months ago and, if I remember right, it needed waaaay more resources than Frigate. If you need more cameras, Frigate supports multiple Corals.

I'm looking to start playing with facial recognition and was wondering whether Double Take is what I should…

Aug 30, 2023 · Indeed no event was created, even though it seems that for an instant it realizes that I was a "person". Frigate config file:

You can then trigger automations based on recognized faces and such. If your object is smaller, it'll be harder to compare. If it detects my phone entering the home zone AND a person walking up to my door within a minute or so, it unlocks the door (and notifies me of that).

For users with Frigate+ enabled, snapshots are accessible in the UI in the Frigate+ pane to allow for quick submission to the Frigate+ service. The dev just put up brand new docs for the v8 release - best tip is to start with the super simple config file and build up from there.

Is DeepStack still being maintained? I meant detecting 'cars' in my cameras.

Jul 23, 2024 · recognize: # minimum face size to be recognized (pixels) min_face_size: 1000 # threshold for face recognition confidence recognition_threshold: 0.8 # time (in seconds) to wait before recognizing the same person again match_timeout: 60 # time (in seconds) to wait before re-identifying a person reidentification_interval: 60 # scale factor for the …

I am using high fps on front-facing cameras because Frigate uses snapshots from that stream and everything is blurry, e.g. blurred face, person.

# NEED TO REMOVE THE MASKS objects: track: - person mask: 0,0,1000,0,1000,200,0,200 filters: person: min_area: 5000 max_area: 100000 min_score: 0.5 threshold: 0.7 mask: 0,0,1000,0,1000,200,0,200

My aim is to keep a log of plate numbers and use this to call out new ones.

I've updated the instructions below to reflect the latest version since there were a ton of changes.
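The lights-on-person automation quoted above arrives flattened and cut off mid-trigger. Here is a minimal sketch of how that pattern typically completes in Home Assistant; the sensor and light entity IDs are hypothetical placeholders, not names from the original post.

```yaml
# A minimal sketch, assuming a Frigate person-count sensor and a light entity
# named as below (both hypothetical; substitute your own entity IDs).
automation:
  - alias: Turn on the outside lights when a person is detected at night
    trigger:
      - platform: numeric_state
        entity_id: sensor.backyard_person_count   # Frigate integration occupancy/count sensor (assumed name)
        above: 0
    condition:
      - condition: or
        conditions:
          - condition: time
            after: "22:00:00"
          - condition: sun
            after: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.outside_lights          # hypothetical switch/light entity
```

A binary occupancy sensor with a plain state trigger works just as well as the numeric_state trigger shown here.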
You'd need to use an add-on solution to do specific face recognition, but also be aware that camera placement can make this tough - cameras at roof height are unlikely to get enough detail (especially at night) for reliable, specific face recognition.

Frigate can't yet handle retention based on available disk space. You can have Frigate as a Docker container or as a Home Assistant add-on.

So my question is: should I use DeepStack or CompreFace? My setup is one 1080p camera, 6th gen i7, and GTX 960m.

I'd always recommend Axis IP cameras; although expensive, in my experience they are very reliable and last long. They are also accessible via the API.

There's an add-on called Double Take that seamlessly integrates MQTT, Frigate and a face recognition engine.

It runs very well on a Raspberry Pi - see the docs - and with the addition of a USB Google Coral adapter (if you can get your hands on one) it will run all the object / person detection with absolutely no issues.

I was thinking that I would be using that unit also for processing and the NAS connected via USB 3.2 to the device for storing the recordings.

Haven't seen any recent posts re Face Recognition and would appreciate any initial … I've had some okay success with BlueIris and DeepStack for recognition.

Frigate isn't facial recognition. Frigate, downloader integration, Google generative AI integration.

I can definitely recommend Reolink for use as a security camera; the AI person detection has been just about perfect for me, and the on-camera AI chip is almost instantaneous. All processing is performed locally on your own hardware, and your camera feeds never leave your home.

Just to try Frigate I set up one camera, just recording clips on person detection on a Pi 4, and the CPU use went up to about 80% most of the time.

Nick, thank you for such detailed information. But for anyone wondering how accurate Frigate is in general, and in particular for people/cars: yesterday I had some landscapers do some work around my house, and with my 4 cameras running all day it triggered over 800 detections for people and cars.

I'm already building home automations on top of Frigate & Node-RED (using MQTT) and it works flawlessly! Kudos to Frigate for such a great project! Now wanting to expand to automations based on face recognition and wondering what's the best path to take.

I've minimized it through playing with the settings, but as accuracy increases, the amount of missed events goes up with it.

I need to install my Google Coral TPU since it eats my i5-11600 up like crazy when processing objects. One Coral USB accelerator can do real-time object recognition on 6 to 7 cameras at once, so it's pretty powerful.

Deepstream - object detection, face recognition.

I was planning to use the new device also for 4K transcoding for Plex, so I've found that the new Intel N100 works wonders for this purpose.

@blakeblackshear @NickM-27 I am not sure if Frigate has had any consideration into implementing facial recognition into the NVR itself or not.

Frigate is spot on with every single car type with the exception of USPS. Please note: car is listed twice because truck has been renamed to car by default. Etc.

Yes, the video is quite laggy. Badly put together automation for a first try but it'll be so good.

I really just wanted the community to know that there are reliable Reolink options out there that can work with a very simple configuration.
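Several comments here mention running Frigate either as the Home Assistant add-on or as a plain Docker container with a USB Coral. For the container route, a minimal docker-compose sketch might look like the following; the paths, shm size and ports are illustrative assumptions to adapt, not values taken from the posts.

```yaml
# Minimal sketch of a Frigate container with a USB Coral passed through.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "256mb"               # sized for a handful of 1080p detect streams (assumption)
    devices:
      - /dev/bus/usb:/dev/bus/usb   # expose the USB Coral to the container
    volumes:
      - ./config:/config            # frigate config.yml lives here (placeholder path)
      - ./storage:/media/frigate    # recordings and snapshots (placeholder path)
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "5000:5000"                 # web UI / API
      - "8554:8554"                 # RTSP restreams
```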
Dogs have been detected as persons, and the percentage is not that different (person is always around 84% while the dogs-as-persons are 81/82%).

I do all this without a Coral but I have a really nice server. Any motion captured will have a high-res clip recorded by Frigate.

I have a lot of UniFi cameras, only a few of which I have installed.

It is called Frigate and I'm going to show you how to set it up and how you can integrate it with Home Assistant.

My Frigate is often 70-71% certain it recognises a person walking around in my birdhouse. But a moving person at a distance would be easier for Frigate to detect than a non-moving person at a distance.

After some research, I've found that people commonly use either DeepStack or CompreFace for face recognition.

Mar 17, 2021 · Double Take is a proxy between Frigate and any of the facial detection projects listed above.

I have an O/C sensor on my front and a motion sensor outside the front door.

Is there a way to "ungroup" facial recognition groups in QuMagie so that I can correct the wrong tags without changing the ones that are correct? It's almost like I need an "unlink these people" option.

When using Frigate+ models, Frigate will choose the snapshot of a person object that has the largest visible face.

Reolink + Frigate (NVR) + DeepStack (object detection / license plate) + Double Take (facial recognition).

You probably want some sort of separate NVR so that you have a 24/7 recording, as you never know when that will be useful.

A REST sensor is set up for each camera. Any of these turn on the outside light.

And I have enabled WebRTC as far as I know; the Frigate documents are like a rabbit hole! 🤣 Do I need to use a special card? Like the Alex WebRTC card?

I just can't seem to get this right: the picture background is sharp but the person moving is really blurry.

Frigate docs include some hints to make ffmpeg work with some non-standard cameras, could be worth a try. Frigate saves from the stream with the record role in 10-second segments.

Unfortunately, the default model was not trained on relevant camera images, including images of people from the top down.

I have this stack running on unRaid with Home Assistant and the detection is incredible.

I've used Wyze, the Samsung cam, and Blink in the past.

Works either after the object detection output by Frigate, or on its own. These object types are frequently confused.

The main attraction is its object / person detection, but this can be easily disabled in the config.

Question, has anyone had success using Frigate detection to automate a light? I have an outdoor floodlight connected to a smart switch and wanted to use a Frigate-based camera feed and person occupancy to set things off.

When I create an object mask in Frigate (Add to Person) I copy it and place it in my frigate.yml file.

The training data is, I believe, based largely on generic images rather than CCTV images, so it's not so precise at differentiating between the subtleties of …

Now, Frigate did add some new features, like requiring motion to happen before recognizing a person to help with false positives, but I still found the higher quality models to be near bulletproof in recognition, and I chose to go that route and am still very happy with DOODS.

Thanks! I'm currently using Frigate. On the two outside cameras in areas where a person would be detected it's like 71 or 73% probability.
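For the dogs-scored-as-persons problem described above, the usual first step is tightening the per-object filters in the Frigate config rather than bolting on another model. A hedged sketch follows; the numbers are illustrative starting points, not recommendations from the thread.

```yaml
# Sketch of stricter person filters in Frigate's config (values are examples).
objects:
  track:
    - person
  filters:
    person:
      min_score: 0.65   # drop low-confidence frames before they enter tracking
      threshold: 0.8    # require a higher median score before a person is reported
      min_area: 5000    # ignore tiny detections (distant animals, artifacts)
```

Raising `threshold` cuts false positives at the cost of occasionally missing marginal detections, which matches the trade-off several commenters describe.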
ALPR is separate, but can be done with CodeProject.AI once Frigate (or whatever) detects a license plate.

Admittedly I am running Frigate on a Debian 11 machine, which is not my usual OS, so perhaps my difficulties with getting Frigate to run could be due to my not being a Linux person.

I use Frigate in combination with my phone. So, all of my automations and integrations are done through Frigate.

Can you elaborate on what and how you are running Frigate? Imo the motion/object detection with zones and masks and all that would be the hardest part, which is what Frigate with a Coral works best at.

Frigate uses 300x300 models to compare with. As you can imagine, having a GPU does help with facial recognition though.

You can use a minimum of 10 images, but they recommend 100 images per camera.

Just object/person detection, but Double Take provides a nice, friendly interface layer between Frigate and a few different face recognition tools.

But now I have installed 3 cameras and am moving to Frigate.

The Coral will greatly increase your image recognition capabilities. I really love Frigate combined with its Home Assistant capabilities.

Frigate is an open source NVR built around real-time AI object detection.

Doorbell/peephole camera detects movement > images are sent to Amazon Rekognition for person detection (loop of 4 until a person is recognized, one per second or so) > doorbell is pressed. I don't want DeepStack/Frigate running just for this, much less on a slow mini PC or a Pi.

All runs smooth and fast for automated and rather powerful person detection on 5 cameras 24/7, running on a 10-year-old MacBook Pro, for an all-in-one security system with Home Assistant on top.

I made Frigate run on my Synology 920, running both MQTT and Frigate in Docker and three cameras connected through RTSP.

…Is not directly supported/accelerated by Coral, but there are implementations using GPU acceleration.

Now I'm using Frigate (Docker) working with HA to do object detection and automation (text-to-speech that a car is coming down the driveway, etc.).

After trying out the new facial recognition feature, seeing it only works on the expensive AI cameras, and doesn't work that well at all (captures a small percentage of faces), I'm considering dumping Protect for something better. Since all my cameras now have their on-board AI, I use the pet and person triggers for events. I use Blue Iris when I want to look at footage.

…be used with Frigate with an appropriate width/height config, or only object detection models? Is it possible to use two models concurrently, e.g. the built-in COCO model plus another? If all I really want to do is to detect both people (as with the COCO model) and squirrels ;-) (as with the MobileNet V2 iNat birds), what is the simplest way to go…

With Frigate+, you get a model fine-tuned to your cameras for improved accuracy in your specific conditions.

I certainly defer to your greater experience on this topic. The camera is facing the door, which is a fully glassed window door, so contrast-wise not the best. But there again, the statue was fairly close to the …

I've been testing Frigate + Double Take for facial recognition on people. USPS delivered a package and I can see the truck approach right in front of my house.

You'll need something like DeepStack for face recognition.

Dec 13, 2020 · EDIT 01-27-2020: Frigate 0.8.0 has been released.
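Because Frigate's detector works on small 300x300 regions, as noted above, a low-resolution substream is plenty for detection while the full-resolution stream is kept for recording and snapshots. A minimal sketch of that split, assuming hypothetical camera name and RTSP URLs:

```yaml
# Sketch: separate detect and record streams for one camera (all values are placeholders).
cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/sub    # low-res substream
          roles:
            - detect                                     # keeps detection cheap
        - path: rtsp://user:pass@192.168.1.10:554/main   # full-res main stream
          roles:
            - record                                     # recordings/snapshots stay sharp
    detect:
      width: 640
      height: 360
```

This is also why the comments recommend feeding Frigate the substream but pulling high-resolution crops for any downstream face recognition step.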
I moved from in-camera detection (HikVision) to Frigate and it eliminated 95% of false positives from things like birds, trees, etc.

Feb 12, 2024 · I don't think this would do face recognition; the Frigate CodeProject.AI detector uses /v1/vision/detection, but the API to do face recognition in CodeProject.AI is /v1/vision/face/recognize.

Effectively it is using Frigate to do the person detection using a Coral; once it identifies a person we take 3 snapshots of the camera spaced 1 second apart and save them as 3 individual files.

In my setup, I would just set up an automation in Home Assistant. However, birds set it off.

Using a Frigate+ model with Frigate will detect face as a "sub label" of person.

Needs some API where people can send public messages / upload video of that car tied to the plate.

update_sub_labels: true # frigate 0.11+ option to include names in frigate events labels: - person stop_on_match: false attempts: # number of times double take will request a frigate latest.jpg for facial recognition latest: 5 # number of times double take will request a frigate snapshot …

It was now detecting people with a 95 to 99% probability.

So if home zone value is zero. See the full configuration reference for an example of expanding the list of tracked objects.

I'm setting up the holy trinity of smart home security consisting of HASS + Frigate + u/Jakowenko's Double Take.

You will be able to fine-tune your model with the images you have uploaded and annotated up to 12 times with your annual subscription.

A single Coral outperforms most CPUs.

…but the system does not detect 'car'. An automation for each camera fires on motion detection in Frigate.

No facial recognition stuff, I don't believe in that and wouldn't want someone being able to enter my house by holding up a picture of me.

Yours is a ton more efficient. If you want to build something yourself, grab an AI accelerator like the Google Coral USB or M.2 and roll your own around the Frigate NVR.

So I've used DeepStack (now CodeProject.AI) before with Blue Iris for object recognition.

As we'll be using GPU offloading we'll install Frigate in a separate Docker container instead of running it as the HAOS add-on.

Frigate is superior for object detection and effortlessly integrates with HA. Everything can run inside HA supervised as add-ons. (Person/car/dog.)

The payload is a call to DOODS2 referencing the debug feed of that camera in Frigate.

I only get it working where it says "person detected", but not "Michael is detected", for example. I also use CompreFace with Frigate and Double Take and a Google Coral.

I use both Reolink cameras for security and Frigate for person detection automations (lights).
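The flattened fragments above (`update_sub_labels`, `labels: - person`, `attempts: latest / snapshot`) come from the Frigate section of a Double Take config. Reassembled into a readable sketch — the URL is a placeholder and the exact option set should be checked against the Double Take documentation:

```yaml
# Sketch of Double Take's frigate section, based on the fragments quoted above.
frigate:
  url: http://frigate:5000     # placeholder address of your Frigate instance
  update_sub_labels: true      # Frigate 0.11+ option: write matched names back as sub labels
  stop_on_match: false         # keep processing all attempts before picking the best match
  labels:
    - person                   # only run face matching on person events
  attempts:
    latest: 5                  # times to request latest.jpg from Frigate per event
    snapshot: 5                # times to request the event snapshot.jpg
```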
In summary, Frigate's video pipeline is a well-structured process that efficiently combines motion detection and object recognition to provide a comprehensive …

The detection detects objects, not number plates, and recognition works at low resolution and low frame rate; typically one uses one of the substreams - but it depends on performance. Works well.

My plan (maybe it gives you an idea) is to run home automation scenes with face and object recognition - laptop + me in the sun patio -> close shutters; my wife with a book -> turn on this …

I have a similar setup. Has anyone had any luck with any integration relating to number plate recognition?

I think DOODS and Frigate use the same TensorFlow models for object recognition? Frigate does add some logic for motion, but I wouldn't expect it to be miles better than DOODS.

I can't seem to find an option in Frigate to set a confidence threshold. A decoder will help with the video intake.

Double Take will take events from Frigate and do faces. I just set it up over the weekend and am training faces; since it's new, and I'm not a great programmer, I haven't figured out any great automations yet, but I'm well on my way.

Because I don't, I use the Frigate live view card (from the Frigate integration), and set the provider to "go2rtc".

The solution is to feed Frigate a low-res stream for object detection, and set the resolution on the cropped snapshots used by CompreFace as high as possible. That allows you to have a smaller image when passing it to CompreFace/Facebox, which will produce quicker responses.

As you see in one of the attached images, in the one with a guy and a dog, the dog is being recognized as a person 😄 Is there a way to improve person recognition other than increasing the threshold?

So you would then configure Scrypted to pull the RTSP stream from Frigate rather than directly from the camera.

I'm running Frigate on an NUC i5 and like 10-15 more containers without a Coral and I really can't complain.

Off the shelf you have the Google Nest cameras, which will do face recognition well.

The people that walk in pass the door, therefore they're not walking that fast. Everything "works" but I'm definitely having issues with Frigate being unable to keep up with the camera feed.

User images and facial recognition data are being sent to the cloud without user consent, and live camera feeds can purportedly be accessed without any authentication.

Never more than that.

Also as an aside, you've set max_frames, which is HIGHLY discouraged as it forcefully breaks Frigate stationary object tracking and leads to undesired … Thanks for chipping in u/nickm_27.

May 22, 2024 · Hello, thought I would share my Node-RED config if anyone is looking to set up the Google Generative AI with Frigate and notifications to Google Home and phones. We then send those 3 files to Google AI. This is using the default prompt, which can be hugely improved to suit my camera. At some point I'll write another version of this that incorporates the …

This month, with the release of the GPT-4 Vision API, I was able to take my experimentation to the next level to allow a higher level of contextual understanding.
When Double Take had enough pixels to work with, it worked well and updated the Frigate event with the name of the person detected.

But with full respect to the Frigate contributors, the objects that it can recognize are not really useful. It needs an image of at least 250x250 px to reliably recognize a face.

mqtt: host: xxxxxxx port: xxxx user: xxxxxx password: xxxxxx # topics for mqtt topics: frigate: frigate/events homeassistant: homeassistant matches: double-take/matches cameras: double-take/cameras # global detect settings (default: shown below) detect: match: # save match images save: true # include base64 encoded string in api results and …

Frigate - motion/object detection only, Coral accelerated. Many thanks.

A USB Coral ($59.99) can handle about 10 cameras. The web UI is awesome. Frigate does object detection only.

I am hoping to create an automation that checks if it's me at the front door camera.

I'm running HA as a VM on Proxmox, on a Ryzen 5 mini PC. I use Frigate with 3 RTSP cameras, recording 24/7 with audio, triggering events on person recognition with no hardware acceleration, and the CPU hardly ever goes beyond 20%, usually much lower.

Facial recognition is used to determine if a face is a known person it is trained on (e.g. family members) or a stranger.

Dec 29, 2022 · I am using Frigate on my HA alongside DeepStack/CompreFace and Double Take.

Jul 22, 2024 · This article describes setting up Frigate with Double Take and CompreFace for facial recognition.

However, you should be utilizing the dedicated decoder from your CPU/GPU to decode the streams. Most CPUs and GPUs have decoders, but passing them to Frigate will depend on the decoder and how you are running Frigate (hopefully Docker).

For object recognition, whether it's DeepStack or CodeProject AI, the real determining factor is which object models you are using.

Double Take isn't accurate or inaccurate by itself; it's just an interface between Frigate and the face recognition software.

Frigate is able to use a much lower resolution because detecting something large like a person doesn't require many pixels. A Coral will free up the CPU cores, which means there is more time for decoding.

Now if you are just detecting 'car', as an example, get a camera with one high-resolution main stream (to take pictures) and one substream that meets the recognition guidelines.

Let's say you have Frigate configured so that your doorbell camera would retain the last 2 days of continuous recording. These options determine which recording segments are kept for continuous recording (but can also affect tracked objects).

With a better PC (I used a mini Ryzen 4500, no need at all going so "high" spec), I run 4 cameras, recording 24/7 with audio and recording clips on person detection, and it works great.

In my setup Frigate night person recognition is poor (I'm using Frigate 0.11, which is not the latest version); night motion sensing is a bit better.

mqtt: host: *** port: *** user: *** …

Does Frigate have plate recognition on its roadmap? License plate is already supported for Frigate+ models (which are slated to come out with 0.13). The model you are using is the normal Frigate model, which does not have licence plate recognition. You need to pay the subscription and train a model using your images to get licence plate objects, same with packages.

There is a workaround where you fire off an automation in the Tapo app that triggers another TP-Link device, like a plug, which in turn triggers a notification.

Plugged the model designation into my frigate.yaml and edited my minimum and threshold for objects. Restarted Frigate and immediately noticed that my detections were much more accurate.

Still works great! EDIT 12-15-2020: I just noticed that Frigate has a 0.8.0 beta release, complete with NVIDIA support.

I am also not sure if many here are following the deve… Do you have a working automation for notifications?
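The masked-out MQTT block above is the MQTT section of a Double Take config. A cleaned-up sketch with placeholder credentials, following the topic names shown in the original fragment (exact option names should be confirmed against the Double Take reference):

```yaml
# Sketch of Double Take's mqtt and detect sections (credentials are placeholders).
mqtt:
  host: 192.168.1.5                # placeholder broker address
  port: 1883
  user: mqtt_user                  # placeholder credentials
  password: mqtt_password
  topics:
    frigate: frigate/events        # Frigate event topic Double Take subscribes to
    homeassistant: homeassistant   # discovery prefix for Home Assistant sensors
    matches: double-take/matches   # where match results are published
    cameras: double-take/cameras

detect:
  match:
    save: true                     # keep an image of each successful match
    base64: false                  # include base64-encoded image in API results (key name per Double Take docs)
```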
I tried the blueprints but I can't get it to work where it says the name which matches the face, like: "Replace {{label}} in title and message of the notification with a person's name if a Double Take face match is detected."

No, just one Coral for Frigate. Frigate can save a snapshot image to /media/frigate/clips for each object that is detected, named as <camera>-<id>.jpg. They are also accessible via the API.

Haven't seen any recent posts re face recognition and would appreciate any initial thoughts — it'll obviously depend on your cameras' resolution though.

If you haven't seen the Frigate+ docs, check them out: https://docs.frigate.video/plus/

The Github page for the Blueprint says that it can be done.

I worry that a lot of people read the Frigate documentation and come away thinking that Reolink cameras requ…

I am trying to use the Double Take facial recognition with the Frigate Notifications (SgtBatten/HA_blueprints) blueprint, but am not able to get it working.

Have the object detection publish to MQTT then set up BI to record based on MQTT.

That being said, here's one of the automations I use for the Frigate object detection and BI recording, just to get you started: alias: Frigate Person Trigger BI Record BP - Zones description: Use Frigate person detector to trigger camera recording in BlueIris trigger: - platform: state entity_id: - binary_sensor. …

Frigate can also do object detection really well and can offload the object detection to a Google Coral TPU. Frigate also uses MQTT to talk to Home Assistant so it can trigger …

sensor.back_lawn_person_occupancy

Now I need Frigate in my car with a roadcam.

Hi 👋! After switching from Nest to Frigate and HA, I tried to replicate the package delivery notification functionality of the older camera system.

It would be cool to be able to set alarms based on Frigate person detection and time of day.

I'd recommend you use CompreFace instead of DeepStack, as the latter is not maintained. Also you have Double Take, which is by Jakowenko; not maintained as well (it's dead).

If Frigate can call a URL you can do it that way also, but IDK if Frigate can, as I never played with that part.

I was able to set up Frigate, but when I went to install DeepStack, their GitHub does not look like it has been updated in 2 years.

The config made some significant breaking changes.

I'd be interested in how you might use that.
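The BlueIris automation quoted above is flattened and cut off mid-trigger. Here is a sketch of how that pattern typically completes, reusing the occupancy sensor name that appears elsewhere on this page; the MQTT topic and payload are placeholders for whatever your BlueIris instance is configured to listen for, not values from the original post.

```yaml
# Sketch: trigger BlueIris recording from a Frigate occupancy sensor via MQTT.
alias: Frigate Person Trigger BI Record
description: Use the Frigate person detector to trigger camera recording in BlueIris
trigger:
  - platform: state
    entity_id: binary_sensor.back_lawn_person_occupancy   # Frigate occupancy sensor (name from the thread)
    to: "on"
action:
  - service: mqtt.publish
    data:
      topic: blueiris/trigger/back_lawn    # placeholder topic that BI is configured to watch
      payload: "on"                        # placeholder payload; match your BlueIris MQTT settings
```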
Frigate+ has a face label so faces can be tracked and more accurately sent to face recognition services instead of guessing that a person is facing the camera, but there have been no plans discussed for Frigate+ / Frigate to host / maintain facial recognition itself.

Get access to custom models designed specifically for Frigate with Frigate+.

You can then feed your image into a third-party face recognition solution like Double Take, which then feeds back the detected name into Frigate as a sub label.

You can also view the Frigate camera, but the framerate is low, so it's better to just go to the source.

Frigate doesn't do individual face recognition, but rather object recognition (cat, person, fish, etc.).

I was able to set up Frigate but when I went to install DeepStack, their GitHub does not look like it has been updated in 2 years.

# frigate settings (default: shown below) frigate: url: # if double take should send matches back to frigate as a sub label # NOTE: requires frigate 0.11.0+ update_sub_labels: false # stop the processing loop if a match is found # if set to false all image attempts will be processed before determining the best match stop_on_match: true # ignore …

I have never used Frigate, but the main difference that I can see is that Viseron has support for different kinds of detectors, and some better hardware acceleration (CUDA, Jetson Nano, etc.). It also has built-in face recognition and some other computer vision implementations.

Apr 6, 2023 · I am really struggling with false detections. I have fine-tuned min area sizes, confidence level percentages etc., but sadly there is no combination that works without a load of false positives (mainly at night), which I think is fair in saying is one of the main reasons many of us turned to Frigate.

I've found the snapshot.jpg image from Frigate produces better results, and you can also crop it in real time with query parameters as long as that Frigate event is still in progress.

On the other hand, Reolink, with a similar monetization model, for some reason has local person detection as an entity in Home Assistant and it works perfectly.

When the container starts it subscribes to Frigate's MQTT events topic and looks for events that contain a person.

Our smart firewalls enable you to shield your business… [ad residue removed]
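For the night-time false positives described above, motion masks and per-object filter masks are the usual tools in Frigate. The sketch below reuses the example polygon coordinates that appear earlier on this page purely as an illustration; the camera name is a placeholder.

```yaml
# Sketch: masking a noisy region for one camera (coordinates are the example polygon from above).
cameras:
  front_yard:
    motion:
      mask:
        - 0,0,1000,0,1000,200,0,200   # ignore motion in this strip (e.g. a busy road / swaying trees)
    objects:
      filters:
        person:
          mask:
            - 0,0,1000,0,1000,200,0,200   # discard person boxes whose bottom edge lands in this area
```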
This aids in secondary processing such as facial and license plate recognition for person and car objects.

latest: 5 # number of times double take will request a frigate snapshot … snapshot: 0 # process frigate images from frigate/+/person …

From there install the addon in HA and you can turn detection on / off for the cams you set up.

For my use cases a 1920x1080 often is enough, but if you want to get into person recognition from a distance I'd look into 4K cameras.

Technically you can run Double Take without Frigate, but passing along camera configs is a lot easier with Frigate.

Apr 23, 2025 · The integration of Frigate person recognition allows for more precise tracking and monitoring of individuals, making it an invaluable tool for security and surveillance applications.

I don't have an NVR set up outside of that; I just have my cameras back up a low-res stream 24/7 via FTP to be all inclusive.

DeepStack shouldn't do any recognition until after person detection from the Coral.

These images are passed from the API to the configured detector(s) until a match is found that meets the configured requirements.

Double Take and Frigate - Frigate passes the scanned faces to a locally installed copy of Double Take and compares against the training pictures you've fed it.

I still have a GitHub issue opened on it. Can't give you the finer details, but it's possible this way.

But I can't even count how many times a tree has been detected as a person, or a cat as a bicycle.

The motion eye is very clever. Still identifies my large cat as a boat on one specific cam, but I guess that's the angle.

Frigate includes the object labels listed below from the Google Coral test data.
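The "configured detector(s)" mentioned above are defined in Double Take's detectors section, for example pointing at a local CompreFace instance. A sketch with a placeholder URL and API key (exact option names should be checked against the Double Take docs):

```yaml
# Sketch of Double Take's detectors section for CompreFace (all values are placeholders).
detectors:
  compreface:
    url: http://compreface:8000      # address of the CompreFace container
    key: xxxxxxxx-xxxx-xxxx          # API key from CompreFace's recognition service
    det_prob_threshold: 0.8          # minimum face-detection confidence before matching
```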
It only detects 'human' once the car has stopped and the person gets out.

person is the only tracked object by default. Pixels are the key, however.

Is this the correct usage or does it need to save the area numbers within the GUI as well? This is what I have for my camera under its Objects portion of the yaml code:

```
objects:
  track:
    - person
    - bear
    - dog
    - cat
  filters:
    person:
```

Frigate, on the other hand, was designed specifically to do object detection on CCTV feeds, and setting it up was pretty simple (you do have to manually write a config file, unlike Shinobi, but pretty much everything you need to know for that is explained in the docs and it's really easy).