Computer Vision based applications in support of the everyday activities of the elderly affected by dementia

Impact of dementia

Dementia is an impairment of cognitive functions in the brain (Brown & O’Connor, 2020) and its impact is linked to age. Projections suggest that the number of patients will increase dramatically, reaching 81 million by 2040 (Rizzi et al., 2014) and 150 million by 2050 (Brown & O’Connor, 2020), consistent with the estimate that the population aged over 65 will double between 2020 and 2050 (United Nations, 2015). Informal caregivers such as family members may be mentally and physically affected, with impacts on their social and professional opportunities. The ratio between patients and caregivers is expected to grow worldwide (WHO, 2002) and, besides the time spent by families, the cost of caring is estimated to reach US$1.2 trillion by 2030 (Brown & O’Connor, 2020).

Research goal and strategy

The goal of this study is to identify what kinds of applications using Computer Vision (CV) and Artificial Intelligence (AI) are being developed to support the everyday life of elderly people with cognitive impairments, and to find gaps for future research and for the implementation of affordable solutions.

Only papers applicable to “live” computer vision and released after 2017 were considered for this review. Papers studying static medical images such as MRI scans, or the post-processing of life-logging images to create memory aid applications, were excluded. Recent literature reviews were used to summarise pre-2017 results.

Existing solutions for cognitively impaired patients

Evaluation of the patient

AI and CV-based applications can analyse facial expressions and infer information about a patient. For example, applications can support patients who are unable to talk (Liang et al., 2020; Zhang et al., 2020) or detect pain (Lautenbacher & Kunz, 2017; Rezaei et al., 2020) and other emotions (Chen & Picard, 2017), providing valuable input for the caregiver and improving the patient’s life. This technology is also used to enhance existing applications: memory aid and communication devices used to support caregivers receive mixed feedback (Brown & O’Connor, 2020), but their effectiveness increases when the application can automatically evaluate the state of the patient using CV (Paletta et al., 2019).
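As a rough illustration of the kind of building block such applications rely on, the sketch below estimates the dominant emotion in a single frame. It assumes the open-source DeepFace library and a hypothetical input image; the studies cited above use their own, clinically validated models.

```python
# A minimal sketch of frame-level emotion estimation, assuming the open-source
# DeepFace library (pip install deepface); the cited studies use their own,
# clinically validated models, so this is only illustrative.
from deepface import DeepFace

def estimate_emotion(frame_path: str) -> str:
    """Return the dominant emotion label detected in a single image frame."""
    # enforce_detection=False avoids an exception when no face is found,
    # which is common with wearable or room cameras.
    results = DeepFace.analyze(
        img_path=frame_path,
        actions=["emotion"],
        enforce_detection=False,
    )
    # Recent DeepFace versions return a list with one entry per detected face.
    first_face = results[0] if isinstance(results, list) else results
    return first_face["dominant_emotion"]

if __name__ == "__main__":
    print(estimate_emotion("patient_frame.jpg"))  # hypothetical input, e.g. "neutral"
```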

Surveillance

Surveillance can certainly support doctors and caregivers, for example by detecting a state of agitation in patients living in an institution (Khan et al., 2022); however, it is also being used to increase the safety of patients living alone. An example is offered by systems able to estimate the human pose and therefore automatically detect a fall (Ng et al., 2020). Such systems can improve safety and prolong the independence of patients, and they are considered a solution for unobtrusive surveillance with limited privacy concerns (Marco & Farinella, 2018). The next step in surveillance is the development of systems able to identify any kind of human activity; attempts have been made using CV in combination with ontologies and wearable sensors (Meditskos et al., 2018; Stavropoulos et al., 2020), but the results are still limited.
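To make the pose-based idea concrete, the following sketch flags a possible fall when the estimated torso becomes roughly horizontal. It assumes MediaPipe Pose and a hypothetical room-camera video; the systems cited above rely on far more robust gait and temporal models.

```python
# A minimal sketch of pose-based fall detection, assuming MediaPipe Pose
# (pip install mediapipe opencv-python); real systems use far more robust
# gait analysis and temporal models than this single-frame heuristic.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def torso_is_horizontal(landmarks) -> bool:
    """Heuristic: the torso is roughly horizontal when the vertical distance
    between shoulder and hip is small compared to the horizontal one."""
    shoulder = landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER]
    hip = landmarks[mp_pose.PoseLandmark.LEFT_HIP]
    return abs(shoulder.y - hip.y) < abs(shoulder.x - hip.x)

cap = cv2.VideoCapture("room_camera.mp4")  # hypothetical input video
with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks and torso_is_horizontal(
            result.pose_landmarks.landmark
        ):
            print("Possible fall detected - alert the caregiver")
cap.release()
```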

Direct support for the patient

It is observed that most applications for dementia target the caregivers instead of the patient, and there is a lack of basic functionalities such as reminders, let alone advanced features that would increase the independence of the patient (Zgonec, 2021). For patients with cognitive impairments, cognitive support is among the most common types of support application, with memory aid being the most common subtype, followed by communication, orientation, reasoning, and decision-making (Ienca et al., 2017). For people with memory complaints, the number of AI-enabled memory aids is increasing, with growing interest in CV, for example for facial recognition on body-worn cameras. These applications indirectly support caregivers too, by reducing stress and increasing the independence of the patients (Horn et al., 2023).

Lifelogging falls under the umbrella of memory aid. It is a form of monitoring whose goal is the self-consumption of the data for self-improvement (Wang et al., 2018). Although very intrusive, it is generally accepted by patients when it provides increased confidence and a feeling of safety (Meditskos et al., 2018). The “Wize Mirror” is a self-monitoring device that can be integrated into everyday objects in front of which the patient stands as part of the daily routine, seamlessly monitoring the health state via CV. Although not specific to patients with dementia, it represents a methodology that could help coachable patients (Colantonio et al., 2018). A further step is to go beyond monitoring by integrating AI into devices able to interact with patients in their daily life. Examples are face recognition to notify the patient of the presence of a visitor (Rudraraju et al., 2020) or assistance in sensitive tasks without privacy invasion, such as providing guidance in the toilet in a fully automated way (Lumetzberger & Kampel, 2020; Lumetzberger et al., 2021). Results are encouraging, but the problem of activity recognition is not yet fully solved.
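As an illustration of the visitor-notification idea, the sketch below trains an eigenface recogniser on a small, hypothetical gallery of known visitors. It assumes opencv-contrib-python and pre-cropped grayscale face images; the cited system adds edge computing and a full face-detection pipeline.

```python
# A minimal sketch of eigenface-based visitor recognition, loosely inspired by
# the edge-computing approach cited above; it assumes opencv-contrib-python
# (for cv2.face) and a hypothetical gallery of pre-cropped grayscale faces.
import cv2
import numpy as np

SIZE = (100, 100)  # eigenfaces require all images to share the same size

# Hypothetical gallery: label -> (name, example image paths).
gallery = {
    0: ("daughter", ["daughter_1.png", "daughter_2.png"]),
    1: ("nurse", ["nurse_1.png", "nurse_2.png"]),
}

images, labels = [], []
for label, (_, paths) in gallery.items():
    for p in paths:
        images.append(cv2.resize(cv2.imread(p, cv2.IMREAD_GRAYSCALE), SIZE))
        labels.append(label)

recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(images, np.array(labels))

def identify_visitor(face_crop_gray) -> str:
    """Return the name of the closest known visitor, or 'unknown'."""
    label, distance = recognizer.predict(cv2.resize(face_crop_gray, SIZE))
    # The distance threshold is arbitrary here and must be tuned in practice.
    return gallery[label][0] if distance < 5000 else "unknown"
```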

CV is being experimented with in assistive robots to enable them to recognise patients and perform practical tasks such as delivering medicine (Vishal et al., 2017). Robots able to assist a patient are being studied with positive outcomes, but the technology is at an early stage and acceptance among patients varies considerably, with the most senior patients refusing to interact (Fracasso et al., 2022). Since many projects have not reached a sufficient maturity level, some studies focus on the improvement of existing solutions. For example, COACH (Hoey et al., 2010) was improved by implementing a new Action Recognition System (Jean-Baptiste & Mihailidis, 2017).

Visual Question Answering (VQA)

VQA applications process an image and a question in natural language and attempt to generate a relevant answer. They are typically based on Convolutional Neural Networks and Recurrent Neural Networks to process the inputs, and Long Short-Term Memory networks to generate an appropriate output. The technology can be supported by techniques such as Graph Neural Networks, ontologies, or contextualisation derived from the image (Manmadhan & Kovoor, 2023; Vijetha & Geetha, 2023). VQA must solve many CV sub-tasks along with natural language processing, demanding a multi-modal approach beyond a single domain (Manmadhan & Kovoor, 2020).
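As a concrete, hedged example of such a multi-modal pipeline, the sketch below answers a question about an image using a publicly available pretrained model via the Hugging Face transformers library; the works cited above describe their own CNN/RNN-based architectures.

```python
# A minimal sketch of an off-the-shelf VQA pipeline, assuming the Hugging Face
# transformers library and the publicly available ViLT model fine-tuned for VQA.
from PIL import Image
from transformers import ViltForQuestionAnswering, ViltProcessor

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

def answer(image_path: str, question: str) -> str:
    """Return the most likely answer to a natural-language question about an image."""
    image = Image.open(image_path).convert("RGB")
    encoding = processor(image, question, return_tensors="pt")
    logits = model(**encoding).logits
    return model.config.id2label[logits.argmax(-1).item()]

# Hypothetical usage in a memory-aid scenario:
# print(answer("kitchen.jpg", "Is the stove turned on?"))
```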

An example of VQA is Be My Eyes (Ramil Brick et al., 2021), an application designed for visually impaired people that blends facial recognition, object detection, and optical character recognition in a single application (Lakhani et al., 2022). Its integration with GPT-4 is expected to further empower visual assistance by providing image-to-text interpretation for people with visual impairments (Qiu et al., 2023). Surprisingly, only a limited number of studies of VQA applied to cognitively impaired patients could be found. One example is an application that can show pictures and answer patients’ questions, but it seems to support the physicians more than the patient (Velazquez & Lee, 2017). A possible explanation for the lack of solutions in this area could be the novelty of the approach, or the difficulty of validating a solution with patients who generally have a limited ability to give feedback.

Discussion

Activity recognition

A common challenge found in several studies is the creation of an AI able to contextualise images and videos. Studies attempt to combine metadata, semantic networks, and the sequencing of images to contextualise the video content: for example, the consistent identification of “indoor”, “screen” and “hands” could suggest the usage of a computer (Wang et al., 2018). The adoption of a probabilistic approach may help mitigate the uncertainty of ambiguous information (Fernando et al., 2017).
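The sketch below gives a naive, hypothetical version of this idea: frame-level concept detections are aggregated over a short window and matched against simple activity profiles, standing in for the probabilistic models used in the cited work.

```python
# A minimal, hypothetical sketch of concept-based activity inference: per-frame
# concept detections are aggregated over a sliding window and scored against
# simple activity "profiles". The scores are a naive stand-in for the
# probabilistic models used in the cited work.
from collections import Counter

# Hypothetical activity profiles: which concepts we expect to co-occur.
ACTIVITY_PROFILES = {
    "using a computer": {"indoor", "screen", "hands"},
    "preparing food": {"indoor", "kitchen", "hands", "food"},
    "walking outdoors": {"outdoor", "street"},
}

def infer_activity(frame_concepts: list[set[str]]) -> tuple[str, float]:
    """Score each activity by how often its expected concepts appear across
    the window of frames, and return the best match with its score."""
    counts = Counter(c for frame in frame_concepts for c in frame)
    n_frames = max(len(frame_concepts), 1)
    best, best_score = "unknown", 0.0
    for activity, expected in ACTIVITY_PROFILES.items():
        score = sum(counts[c] for c in expected) / (len(expected) * n_frames)
        if score > best_score:
            best, best_score = activity, score
    return best, best_score

# Example: three consecutive frames with consistent detections.
window = [{"indoor", "screen"}, {"indoor", "screen", "hands"}, {"screen", "hands"}]
print(infer_activity(window))  # -> ("using a computer", ~0.78)
```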

Performance and privacy are common challenges, and the general solution is to reduce the amount of information in the image. For example, studies on egocentric videos suggest optimising recognition by focusing on the region near the hands, using changes in colour and motion to detect a change of activity (Zuo et al., 2018). There is no general consensus on the best strategy, which also depends on the specific type of application and on the camera angle (e.g. egocentric vs third person).
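The sketch below illustrates the hand-region strategy under strong assumptions: a fixed region of interest near the bottom of an egocentric frame, dense optical flow from OpenCV, and an arbitrary motion threshold as a cue that the activity has changed.

```python
# A minimal sketch of the "look near the hands" idea, assuming OpenCV: dense
# optical flow is computed only inside a fixed (hypothetical) hand region, and
# a jump in average motion is taken as a cue that the activity has changed.
import cv2
import numpy as np

cap = cv2.VideoCapture("egocentric.mp4")  # hypothetical wearable-camera video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Fixed region of interest near the bottom centre of the frame, where the
# wearer's hands usually appear; a real system would track the hands instead.
h, w = prev_gray.shape
roi = (slice(int(0.5 * h), h), slice(int(0.25 * w), int(0.75 * w)))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        np.ascontiguousarray(prev_gray[roi]),
        np.ascontiguousarray(gray[roi]),
        None, 0.5, 3, 15, 3, 5, 1.2, 0,
    )
    motion = float(np.mean(np.linalg.norm(flow, axis=2)))
    if motion > 5.0:  # arbitrary threshold, to be tuned per camera
        print("High hand motion - possible activity change")
    prev_gray = gray
cap.release()
```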

Privacy and cybersecurity risks

In its study on life-logging applications, ENISA (2011) pointed out the main risks of CV-based applications: threats to privacy, loss of data with additional risks such as fraud or reputational damage, and psychological stress due to being under surveillance. There are also risks related to the incorrect management of such pervasive data collection. Patients mostly fear the invasion of privacy and their reactions are mixed (Mihailidis et al., 2008), but they also tend to weigh privacy against perceived usefulness (Sarkisian et al., 2003). Studies try to identify systems able to assist a patient in a fully automated way while respecting their privacy; one solution was found by using depth sensors instead of RGB cameras (Lumetzberger & Kampel, 2020; Lumetzberger et al., 2021). Despite these concerns, privacy remains the least considered ethical problem analysed in studies (Ienca et al., 2018). A proposed solution extracts a skeleton by estimating the human pose; combined with semantic segmentation, this can hide identifying features in a video while preserving rich content for further processing (Mishra et al., 2023).
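The following sketch illustrates the masking idea under the assumption that MediaPipe Pose (with segmentation enabled) is available: the person’s pixels are blacked out and only the estimated skeleton is drawn back, hiding appearance while keeping the motion analysable. The cited work uses its own, more elaborate pipeline.

```python
# A minimal sketch of privacy-preserving masking, assuming MediaPipe Pose with
# segmentation enabled: the person's pixels are blacked out and only the
# skeleton landmarks are drawn back, so behaviour stays analysable while
# appearance is hidden. The cited work uses a more elaborate pipeline.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

def anonymise_frame(frame_bgr):
    """Return a copy of the frame with the person masked out and the
    estimated skeleton drawn on top."""
    with mp_pose.Pose(static_image_mode=True, enable_segmentation=True) as pose:
        result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    out = frame_bgr.copy()
    if result.segmentation_mask is not None:
        person = result.segmentation_mask > 0.5  # boolean person mask
        out[person] = 0  # hide appearance (clothes, face, skin colour)
    if result.pose_landmarks:
        mp_drawing.draw_landmarks(out, result.pose_landmarks, mp_pose.POSE_CONNECTIONS)
    return out

# Hypothetical usage on a single saved frame:
# cv2.imwrite("anonymised.png", anonymise_frame(cv2.imread("room_frame.png")))
```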

Another concern comes from the large datasets required to train an AI. They can hardly be anonymised and they pose a serious privacy risk, especially because the laws are inadequate to protect patients in these scenarios (Santosh, 2021). Compliance with laws such as the GDPR is also questionable (Gerke et al., 2020). Studies try to protect patients by removing at least the physiological information in videos (Chen & Picard, 2017), but this would not be applicable to applications focusing on facial expressions. Another problem is the lack of standard practices to measure cybersecurity in e-Health applications in general (Burke et al., 2019), although the potential impact can be severe, including theft of sensitive data and even death (Ksibi et al., 2022). Requirements for e-health IoT for assisted living are well defined, but their implementation is lacking and not cost-effective, although the level of concern is high (Minoli et al., 2017).

Accessibility

Understandably, patients with cognitive impairments struggle with technology even at a mild level of dementia, and, since they seek interaction with their caregivers, technology should aim to support that relationship rather than replace it. There is a lack of studies involving cognitively impaired people due to the difficulty of interacting with them (Jönsson et al., 2019), so it is possible that improvements could still be found in the future. In terms of usability, it is easier for patients to interact with systems that accept speech input in natural language (Vijetha & Geetha, 2023). Configuration options are also important to find a setting that is pleasant and not overwhelming for the patient (Jönsson et al., 2019). For application design it is also important to consider that patients with dementia can use technology but struggle with features such as passwords and alarms, and find CV-based applications more acceptable when living in institutions (Kenigsberg et al., 2019).

Ethical aspects

The discussion of ethical aspects should be part of the design phase, yet 67% of all assistive solutions (not limited to CV-based ones) are developed without any ethical consideration (Ienca et al., 2018).

It is essential to mention that AI can be biased with respect to gender, ethnicity, and skin colour, resulting in unequal responses and raising concerns when the AI treats a patient. This problem may be mitigated by retraining the AI with pictures of older people or patients with dementia. It is also observed that some models are better suited than others to process images of patients with dementia (Taati et al., 2019), which apparently pose an increased challenge for inadequately trained AI. Regarding training, however, ethical problems remain in the exploitation of users’ data, because the user’s consent, or even the reliability of the data itself, is often uncertain (Santosh, 2021). A particular ethical issue, although common to any AI-based technology, comes from “black box” algorithms when used to treat patients: it is unclear whether clinicians need to disclose that they do not understand the AI’s criteria, nor is there a clear attribution of responsibility in case of errors (Gerke et al., 2020).

Conclusions

Existing assistive solutions can support patients with cognitive impairments and offer a way to extend the patients’ autonomy while respecting their privacy; however, we are still far from holistic solutions implementing effective virtual or robotic caregivers.

There is a lack of consensus on the best strategies to solve CV problems, in particular activity recognition, in order to offer more active support to the patient. A promising path is shown by applications such as Be My Eyes, which, although designed for visually impaired people, suggest an effective way to implement CV-based AI able to contextualise the environment and offer valuable information to the user. The design of a similar application targeting patients with dementia must, however, take into consideration the severe limitations of this type of user in terms of usability, the ethical aspects of guiding users with limited cognitive capabilities, and the privacy and cybersecurity risks to which the users would be exposed, which are currently not given sufficient consideration.

References

  • Brown, A., & O’Connor, S. (2020) Mobile health applications for people with dementia: a systematic review and synthesis of qualitative studies. Informatics for Health and Social Care, 45(4), 343-359. Available from https://www.pure.ed.ac.uk/ws/portalfiles/portal/145380412/BrownOConnorIHSC2020MobileHealthApplications.pdf
  • Burke, W., Oseni, T., Jolfaei, A., & Gondal, I. (2019) Cybersecurity indexes for ehealth. Proceedings of the australasian computer science week multiconference (pp. 1-8). Available from https://www.panacearesearch.eu/sites/default/files/2019-Cybersecurity%20Indexes%20for%20eHealth.pdf
  • Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. (2017) Realtime multi-person 2d pose estimation using part affinity fields. Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7291-7299). Available from https://openaccess.thecvf.com/content_cvpr_2017/papers/Cao_Realtime_Multi-Person_2D_CVPR_2017_paper.pdf
  • Chen, W., & Picard, R. W. (2017) Eliminating physiological information from facial videos. 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition. IEEE. 48-55. Available from https://dspace.mit.edu/bitstream/handle/1721.1/138002/17.Chen-etal-FG.pdf?sequence=2&isAllowed=y
  • Colantonio S., Coppini G., Giorgi D., Morales M.A., Pascal M.A. (2018) Computer vision for ambient assisted living: Monitoring systems for personalized healthcare and wellness that are robust in the real world and accepted by users, carers, and society. Computer Vision for Assistive Healthcare. Academic Press, 2018. 147-182. Available from https://openportal.isti.cnr.it/data/2018/398943/2018_398943.preprint.pdf
  • ENISA (2011) To log or not to log? Risks and benefits of emerging life-logging applications. Available from https://www.enisa.europa.eu/publications/life-logging-risk-assessment
  • Fernando Crispim-Junior, C., Vlasselaer, J., Dries, A., & Bremond, F. (2017) BEHAVE-Behavioral analysis of visual events for assisted living scenarios. Proceedings of the IEEE International Conference on Computer Vision Workshops. 1347-1353. Available from https://openaccess.thecvf.com/content_ICCV_2017workshops/papers/w22/Crispim-Junior_BEHAVE-_Behavioral_ICCV_2017_paper.pdf
  • Fracasso, F., Buchweitz, L., Theil, A., Cesta, A., & Korn, O. (2022) Social robots acceptance and marketability in Italy and Germany: a cross-national study focusing on assisted living for older adults. International Journal of Social Robotics 14(6): 1463-1480. Available from https://link.springer.com/article/10.1007/s12369-022-00884-z
  • Gerke S., Minssen T., Cohen G. (2020) Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare, Chapter 12 (295-336), Academic Press. ISBN 9780128184387. DOI https://doi.org/10.1016/B978-0-12-818438-7.00012-5.
  • Hoey, J., Poupart, P., von Bertoldi, A., Craig, T., Boutilier, C., & Mihailidis, A. (2010) Automated handwashing assistance for persons with dementia using video and a partially observable Markov decision process. Computer Vision and Image Understanding. 114(5): 503-519. Available from https://cs.brown.edu/courses/csci2951-k/papers/hoey08.pdf
  • Horn, B. L., Albers, E. A., Mitchell, L. L., Jutkowitz, E., Finlay, J. M., Millenbah, A. N. & Mikal, J. P. (2023) Can technology-based social memory aids improve social engagement? Perceptions of a novel memory aid for persons with memory concerns. Journal of Applied Gerontology. 42(3): 399-408. Available from https://journals.sagepub.com/doi/abs/10.1177/07334648221134869
  • Ienca, M., Fabrice, J., Elger, B., Caon, M., Scoccia Pappagallo, A., Kressig, R. W., & Wangmo, T. (2017) Intelligent assistive technology for Alzheimer’s disease and other dementias: a systematic review. Journal of Alzheimer’s disease. 56(4): 1301-1340. Available from https://www.researchgate.net/profile/Marcello-Ienca/publication/313461148_Intelligent_Assistive_Technology_for_Alzheimer%27s_Disease_and_Other_Dementias_A_Systematic_Review/links/5a1582dba6fdcc314923ad21/Intelligent-Assistive-Technology-for-Alzheimers-Disease-and-Other-Dementias-A-Systematic-Review.pdf
  • Ienca, M., Wangmo, T., Jotterand, F., Kressig, R. W., & Elger, B. (2018) Ethical design of intelligent assistive technologies for dementia: a descriptive review. Science and engineering ethics. 24: 1035-1055. Available from https://www.researchgate.net/profile/Marcello-Ienca/publication/319989485_Ethical_Design_of_Intelligent_Assistive_Technologies_for_Dementia_A_Descriptive_Review/links/5a056693aca2726b4c771c35/Ethical-Design-of-Intelligent-Assistive-Technologies-for-Dementia-A-Descriptive-Review.pdf
  • Jean-Baptiste, E. M., & Mihailidis, A. (2017) Benefits of automatic human action recognition in an assistive system for people with dementia. IEEE Canada international humanitarian technology conference. IEEE. 61-65. Available from https://ieeexplore.ieee.org/abstract/document/8058201
  • Jönsson, K. E., Ornstein, K., Christensen, J., & Eriksson, J. (2019) A reminder system for independence in dementia care: a case study in an assisted living facility. Proceedings of the 12th ACM international conference on pervasive technologies related to assistive environments. 176-185. Available from https://www.researchgate.net/profile/Jonas-Christensen-4/publication/333360611_A_reminder_system_for_independence_in_dementia_care_a_case_study_in_an_assisted_living_facility/links/5d1a121392851cf4405b0b9a/A-reminder-system-for-independence-in-dementia-care-a-case-study-in-an-assisted-living-facility.pdf
  • Kenigsberg, P. A., Aquino, J. P., Bérard, A., Brémond, F., Charras, K., Dening, T. & Manera, V. (2019) Assistive technologies to address capabilities of people with dementia: from research to practice. Dementia. 18(4): 1568-1595. Available from https://hal.inria.fr/hal-01850005/document
  • Khan, S. S., Mishra, P. K., Javed, N., Ye, B., Newman, K., Mihailidis, A., & Iaboni, A. (2022) Unsupervised deep learning to detect agitation from videos in people with dementia. IEEE Access. 10: 10349-10358. Available from https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9684388
  • Ksibi, S., Jaidi, F., & Bouhoula, A. (2022) A Comprehensive Study of Security and Cyber-Security Risk Management within e-Health Systems: Synthesis, Analysis and a Novel Quantified Approach. Mobile Networks and Applications. 1-21. Available from https://link.springer.com/article/10.1007/s11036-022-02042-1
  • Lakhani, N., Lakhotiya, H., & Mulla, N (2022) Be My Eyes: An Aid for the Visually Impaired. 2022 IEEE 3rd Global Conference for Advancement in Technology. IEEE 1-6. Available from https://ieeexplore.ieee.org/document/9972160
  • Lautenbacher, S., & Kunz, M. (2017) Facial pain expression in dementia: a review of the experimental and clinical evidence. Current Alzheimer Research. 14(5): 501-505. Available from https://www.researchgate.net/profile/Stefan-Lautenbacher-2/publication/304027156_Facial_Pain_Expression_in_Dementia_A_Review_of_the_Experimental_and_Clinical_Evidence/links/5817034a08aedc7d896775c1/Facial-Pain-Expression-in-Dementia-A-Review-of-the-Experimental-and-Clinical-Evidence.pdf
  • Liang, X., Angelopoulou, A., Kapetanios, E., Woll, B., Al Batat, R., & Woolfe, T. (2020) A multi-modal machine learning approach and toolkit to automate recognition of early stages of dementia among british sign language users. Computer Vision–ECCV 2020 Workshops. Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16. 278-293. Springer International Publishing. Available from https://arxiv.org/pdf/2010.00536.pdf
  • Lumetzberger, J., & Kampel, M. (2020) WCBuddy: Using the toilet more autonomously via ICT. Available from https://forschung.fh-kaernten.at/aal/files/2020/05/07-Lumetzberger.pdf
  • Lumetzberger, J., Ginzinger, F., & Kampel, M. (2021) Sensor-based toilet instructions for people with dementia. Advances in Human Factors and System Interactions: Proceedings of the AHFE 2021 Virtual Conference on Human Factors and Systems Interaction, July 25-29, 2021, USA. 101-108. Springer International Publishing. Available from https://link.springer.com/chapter/10.1007/978-3-030-79816-1_13
  • Manmadhan, S., & Kovoor, B. C. (2020) Visual question answering: a state-of-the-art review. Artificial Intelligence Review. 53: 5705-5745. Available from https://link.springer.com/article/10.1007/s10462-020-09832-7
  • Manmadhan, S., & Kovoor, B. C. (2023) Object-Assisted Question Featurization and Multi-CNN Image Feature Fusion for Visual Question Answering. International Journal of Intelligent Information Technologies. 19(1): 1-19. Available from https://www.igi-global.com/gateway/article/full-text-pdf/318671&riu=true
  • Marco, L., & Farinella, G. M. (2018) Computer vision for assistive healthcare. Academic Press. Chapter 6. ISBN: 978-0-12-813445-0. Available from https://books.google.nl/books?id=vpY-DwAAQBAJ
  • Meditskos, G., Plans, P. M., Stavropoulos, T. G., Benois-Pineau, J., Buso, V., & Kompatsiaris, I. (2018) Multi-modal activity recognition from egocentric vision, semantic enrichment and lifelogging applications for the care of dementia. Journal of Visual Communication and Image Representation. 51: 169-190. Available from https://www.researchgate.net/profile/Thanos-Stavropoulos-3/publication/322848495_Multi-modal_Activity_Recognition_from_Egocentric_Vision_Semantic_Enrichment_and_Lifelogging_Applications_for_the_Care_of_Dementia/links/5b56edbfa6fdcc8dae4006af/Multi-modal-Activity-Recognition-from-Egocentric-Vision-Semantic-Enrichment-and-Lifelogging-Applications-for-the-Care-of-Dementia.pdf
  • Mihailidis, A., Cockburn, A., Longley, C., Boger, J. (2008) The Acceptability of Home Monitoring Technology Among Community-Dwelling Older Adults and Baby Boomers. Assistive Technology. 20(1): 1–12. Available from https://www.tandfonline.com/doi/abs/10.1080/10400435.2008.10131927
  • Minoli, D., Sohraby, K., & Occhiogrosso, B. (2017) Iot security (IoTsec) mechanisms for e-health and ambient assisted living applications. 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE). IEEE. 13-18. Available from https://ieeexplore.ieee.org/abstract/document/8010568
  • Mishra, P. K., Iaboni, A., Ye, B., Newman, K., Mihailidis, A., & Khan, S. S. (2023) Privacy-protecting behaviours of risk detection in people with dementia using videos. BioMedical Engineering OnLine. 22(1): 1-17. Available from https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/s12938-023-01065-3
  • Ng, K. D., Mehdizadeh, S., Iaboni, A., Mansfield, A., Flint, A., & Taati, B. (2020) Measuring gait variables using computer vision to assess mobility and fall risk in older adults with dementia. IEEE journal of translational engineering in health and medicine. 8, 1-9. Available from https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9103018
  • Paletta, L., Schüssler, S., Zuschnegg, J., Steiner, J., Pansy-Resch, S., Lammer, L., Prodromou, D., Brunsch, S., Lodron, G. & Fellner, M. (2019) AMIGO - A Socially Assistive Robot for Coaching Multimodal Training of Persons with Dementia. Available from https://link.springer.com/chapter/10.1007/978-3-030-17107-0_13
  • Ramil Brick, E., Caballero Alonso, V., O’Brien, C., Tong, S., Tavernier, E., Parekh, A. & Lemon, O. (2021, October). Am i allergic to this? assisting sight impaired people in the kitchen. Proceedings of the 2021 International Conference on Multimodal Interaction. 92-102. Available from https://pure.hw.ac.uk/ws/portalfiles/portal/51289903/ICMI_no_copyright_002_.pdf
  • Rezaei, S., Moturu, A., Zhao, S., Prkachin, K. M., Hadjistavropoulos, T. & Taati, B. (2020) Unobtrusive pain monitoring in older adults with dementia using pairwise and contrastive training. IEEE Journal of Biomedical and Health Informatics. 25(5): 1450-1462. Available from https://arxiv.org/pdf/2101.03251.pdf
  • Rizzi, L., Rosset, I., & Roriz-Cruz, M. (2014) Global epidemiology of dementia: Alzheimer’s and vascular types. BioMed research international. Available from https://www.hindawi.com/journals/bmri/2014/908915/
  • Rudraraju, S. R., Suryadevara, N. K., & Negi, A. (2020) Edge computing for visitor identification using eigenfaces in an assisted living environment. Assistive Technology for the Elderly. 235-248. Academic Press. Available from https://www.sciencedirect.com/science/article/pii/B9780128185469000087
  • Sarkisian, G. P., Melenhorst, A. S., Rogers, W. A., & Fisk, A. D. (2003). Older adults’ opinions of a technology-rich home environment: conditional and unconditional device acceptance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 47(15): 1800-1804. Sage CA, Los Angeles. SAGE Publications. Available from https://journals.sagepub.com/doi/pdf/10.1177/154193120304701505
  • Santosh, K. C., & Gaur, L. (2021) Privacy, security, and ethical issues. Artificial Intelligence and Machine Learning in Public Healthcare: Opportunities and Societal Impact. 65-74. Available from https://link.springer.com/chapter/10.1007/978-981-16-6768-8_8
  • Stavropoulos, T. G., Meditskos, G., Andreadis, S., Avgerinakis, K., Adam, K., & Kompatsiaris, I. (2020) Semantic event fusion of computer vision and ambient sensor data for activity recognition to support dementia care. Journal of Ambient Intelligence and Humanized Computing. 11: 3057-3072. Available at https://www.researchgate.net/profile/Thanos-Stavropoulos-3/publication/312277005_Semantic_event_fusion_of_computer_vision_and_ambient_sensor_data_for_activity_recognition_to_support_dementia_care/links/59f9e8ef458515de05ce52b0/Semantic-event-fusion-of-computer-vision-and-ambient-sensor-data-for-activity-recognition-to-support-dementia-care.pdf
  • Qiu, J., Li, L., Sun, J., Peng, J., Shi, P., Zhang, R. & Lo, B. (2023) Large AI Models in Health Informatics: Applications, Challenges, and the Future. arXiv preprint arXiv:2303.11568. Available from https://arxiv.org/pdf/2303.11568.pdf
  • Taati, B., Zhao, S., Ashraf, A. B., Asgarian, A., Browne, M. E., Prkachin, K. M., … & Hadjistavropoulos, T. (2019) Algorithmic bias in clinical populations—evaluating and improving facial analysis technology in older adults with dementia. IEEE access. 7: 25527-25534. Available from https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8643365
  • United Nations - Department of Economic and Social Affairs (2015) Transforming our world: the 2030 Agenda for Sustainable Development. Available from https://sdgs.un.org/2030agenda
  • United Nations - Department of Economic and Social Affairs, Population Division (2020). World Population Ageing 2020 Highlights: Living arrangements of older persons. Available from https://www.un.org/development/desa/pd/sites/www.un.org.development.desa.pd/files/undesa_pd-2020_world_population_ageing_highlights.pdf
  • Velazquez, M. & Lee, Y. (2017) CVRT: Cognitive Visual Recognition Tracker. IEEE International Conference on Healthcare Informatics. IEEE. 31-38. Available from https://ieeexplore.ieee.org/abstract/document/8031129
  • Vijetha, U., & Geetha, V. (2023). Opportunities and Challenges in Development of Support System for Visually Impaired: A Survey. 13th International Conference on Cloud Computing, Data Science & Engineering (Confluence). IEEE. 684-690. Available from https://ieeexplore.ieee.org/abstract/document/10048861
  • Vishal, V., Gangopadhyay, S., & Vivek, D. (2017) CareBot: The automated caretaker system. Proceedings of the 2017 International Conference On Smart Technologies For Smart Nation (SmartTechCon), Bengaluru, India, 17–19 August 2017. Piscataway, NJ, USA. IEEE. 1333–1338. Available from https://ieeexplore.ieee.org/abstract/document/8358583
  • Wang, P., Sun, L., Smeaton, A. F., Gurrin, C. & Yang, S. (2018) Computer vision for lifelogging: Characterizing everyday activities based on visual semantics. Computer Vision for Assistive Healthcare. 249-282. Academic Press. Available from https://doras.dcu.ie/22378/1/Lifelogging_Chapter_Elsevier_Book(3).pdf
  • WHO (2002) Current and future long-term care needs: an analysis based on the 1990 WHO study, The Global Burden of Disease and the International Classification of Functioning, Disability and Health. World Health Organization, Geneva. Available from https://apps.who.int/iris/handle/10665/67349
  • Zgonec, S. (2021) Mobile apps supporting people with dementia and their carers: Literature Review and Research Agenda. IFAC-PapersOnLine. 54(13): 663-668. Available from https://www.sciencedirect.com/science/article/pii/S2405896321019650
  • Zhang, L., Arandjelović, O., Dewar, S., Astell, A., Doherty, G., & Ellis, M. (2020) Quantification of advanced dementia patients’ engagement in therapeutic sessions: An automatic video based approach using computer vision and machine learning. 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE. 5785-5788. Available from https://centaur.reading.ac.uk/90669/1/Video_Analysis_of_Advanced_Dementia_Patients__Behavioural_Engagement_Level.pdf
  • Zuo, Z., Yang, L., Peng, Y., Chao, F., & Qu, Y. (2018) Gaze-informed egocentric action recognition for memory aid systems. IEEE Access. 6: 12894-12904. Available from https://ieeexplore.ieee.org/document/8305456
Tags: CV AI