Abstract
Malnutrition is a multidomain problem affecting 54% of older adults in long-term care (LTC). Monitoring nutritional intake in LTC is laborious and subjective, limiting clinical inference capabilities. Recent advances in automatic image-based food estimation have not yet been evaluated in LTC settings. Here, we describe a fully automatic imaging system for quantifying food intake. We propose a novel deep convolutional encoder-decoder food network with depth-refinement (EDFN-D) using an RGB-D camera for quantifying a plate’s remaining food volume relative to reference portions in whole and modified texture foods. We trained and validated the network on the pre-labelled UNIMIB2016 food dataset and tested on our two novel LTC-inspired plate datasets (689 plate images, 36 unique foods). EDFN-D performed comparably to depth-refined graph cut on IOU (0.879 vs. 0.887), with intake errors well below typical 50% (mean percent intake error:
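The abstract describes estimating a plate's remaining food volume from an RGB-D capture (segmentation by EDFN-D, then depth-based volume refinement) and reporting intake relative to a reference portion. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the function names, the per-pixel area parameter, and the precomputed plate-plane depth are assumptions introduced here for clarity, and the actual segmentation and depth-refinement details are those of EDFN-D as described in the paper.

```python
# Hypothetical sketch (not the authors' released code): estimate remaining
# food volume from an RGB-D plate image given a food segmentation mask,
# then compute percent intake relative to a reference (full) portion.
import numpy as np

def food_volume_ml(depth_mm, food_mask, plate_plane_mm, pixel_area_mm2):
    """Integrate food height above the estimated plate plane over the
    segmented food pixels.

    depth_mm       : (H, W) depth map from the RGB-D camera, in millimetres
    food_mask      : (H, W) boolean mask of food pixels (e.g. from a
                     segmentation network such as EDFN-D)
    plate_plane_mm : (H, W) per-pixel depth of the empty plate surface
    pixel_area_mm2 : physical area covered by one pixel at the plate, in mm^2
    """
    # Height of food above the plate; clamp negatives caused by sensor noise.
    height_mm = np.clip(plate_plane_mm - depth_mm, 0.0, None)
    volume_mm3 = np.sum(height_mm[food_mask]) * pixel_area_mm2
    return volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

def percent_intake(remaining_ml, reference_ml):
    """Percent of the reference portion consumed (100% = fully eaten)."""
    return 100.0 * (1.0 - remaining_ml / reference_ml)

if __name__ == "__main__":
    # Toy example with synthetic data standing in for a real RGB-D capture.
    h, w = 240, 320
    plate_plane = np.full((h, w), 450.0)   # flat plate ~45 cm from the camera
    depth = plate_plane.copy()
    mask = np.zeros((h, w), dtype=bool)
    mask[100:140, 120:200] = True          # segmented food region
    depth[mask] -= 15.0                    # food sits 15 mm above the plate
    remaining = food_volume_ml(depth, mask, plate_plane, pixel_area_mm2=1.0)
    print(f"remaining ~ {remaining:.0f} mL, "
          f"intake = {percent_intake(remaining, reference_ml=120.0):.1f}%")
```

In this toy example the 40 x 80 pixel food region at 15 mm height yields roughly 48 mL remaining, i.e. 60% intake against a 120 mL reference portion; in the paper's setting the mask comes from the trained network and the volume is depth-refined rather than derived from synthetic values.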
Details
1 University of Waterloo, Waterloo, Systems Design Engineering, Waterloo, Canada (GRID:grid.46078.3d) (ISNI:0000 0000 8644 1405); Waterloo AI Institute, Waterloo, Canada (GRID:grid.46078.3d); Schlegel-UW Research Institute for Aging, Waterloo, Canada (GRID:grid.498777.2)
2 KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, Canada (GRID:grid.231844.8) (ISNI:0000 0004 0474 0428)
3 University of Waterloo, Waterloo, Systems Design Engineering, Waterloo, Canada (GRID:grid.46078.3d) (ISNI:0000 0000 8644 1405); Waterloo AI Institute, Waterloo, Canada (GRID:grid.46078.3d)
4 University of Waterloo, Waterloo, Mechanical and Mechatronics Engineering, Waterloo, Canada (GRID:grid.46078.3d) (ISNI:0000 0000 8644 1405)
5 Schlegel-UW Research Institute for Aging, Waterloo, Canada (GRID:grid.498777.2); University of Waterloo, Waterloo, Kinesiology and Health Studies, Waterloo, Canada (GRID:grid.46078.3d) (ISNI:0000 0000 8644 1405)