====== 2021-2022 Project Page ======
===== Mobility Data Project =====
==== Meeting Note ====
2022/10/24
* Attendee: Zhiyuan, Xiangyu, Yuanbo, Yang
* Meeting Summary
  * Xiangyu
    * Xiangyu shared an instance normalization method for time-series forecasting against distribution shift, which was published in ICLR 2022.
    * Using this method, the MSE of the MLP for gaode'
    * Next step: I will implement this method in our meta-learning framework to see the improvements and compare the effectiveness of our method and the normalization. [Presentation-slides:
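The ICLR 2022 method described above sounds like a reversible per-instance normalization for forecasting (RevIN-style). Below is a minimal sketch of that idea, assuming a PyTorch backbone; the class name, parameters, and defaults are illustrative assumptions, not taken from the actual implementation.

<code python>
import torch
import torch.nn as nn

class SeriesInstanceNorm(nn.Module):
    """Normalize each input window by its own statistics before the model
    and de-normalize the forecast afterwards, reducing the effect of
    distribution shift between windows. (Illustrative sketch only.)"""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        # learnable affine parameters, one pair per feature (assumed)
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def normalize(self, x):
        # x: (batch, time, features); statistics are per instance, per feature
        self.mean = x.mean(dim=1, keepdim=True).detach()
        self.std = torch.sqrt(x.var(dim=1, keepdim=True, unbiased=False) + self.eps).detach()
        return (x - self.mean) / self.std * self.weight + self.bias

    def denormalize(self, y):
        # y: (batch, horizon, features); undo the affine map, restore statistics
        return (y - self.bias) / (self.weight + self.eps) * self.std + self.mean
</code>

With such a layer the forecasting model only ever sees normalized inputs, e.g. ''y_hat = norm.denormalize(mlp(norm.normalize(x)))'', and the output is mapped back to the original scale before computing the MSE.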
  * Zhiyuan
    * Summary:
      * This week, I conducted a series of experiments comparing the performance of our soft-restricted MF multi-task learning model with models trained on a single loss (see the sketch after this list).
      * This week's experiments reveal that the multi-task loss contributes only a limited improvement on both tasks.
      * Moreover, the LSTM-based backbone tends to produce smoother predictions than the more fluctuating real data.
      * An important observation:
    * Future Plan:
      * Maybe next week we can try to use multi-source input data or time-series analysis methods to deal with it.
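The note does not describe the soft-restricted MF multi-task model itself, so the following is only a generic sketch of what comparing a multi-task loss against a single loss can look like: two per-task MSE terms plus a soft restriction that keeps the task-specific parameters close. The function name, the choice of MSE, and all weights are assumptions for illustration.

<code python>
import torch
import torch.nn.functional as F

def soft_restricted_multitask_loss(pred_a, target_a, pred_b, target_b,
                                   params_a, params_b,
                                   w_a=1.0, w_b=1.0, lam=0.1):
    """Weighted sum of two task losses plus a soft restriction that
    penalizes divergence between the two task-specific parameter sets.
    A single-loss baseline would keep only one of the first two terms."""
    loss_a = F.mse_loss(pred_a, target_a)
    loss_b = F.mse_loss(pred_b, target_b)
    # soft restriction: encourage (rather than force) parameter sharing
    restriction = sum((pa - pb).pow(2).mean() for pa, pb in zip(params_a, params_b))
    return w_a * loss_a + w_b * loss_b + lam * restriction
</code>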
2022/6/30
* Attendee: Zhiyuan, Xiangyu, Yang