Guangyu Chen
I earned my Ph.D. in Artificial Intelligence from Renmin University of China in 2024, specializing in digital watermarking, voice synthesis, and natural language processing.
As a freelance developer, I've built multiple apps and web products, securing over one million RMB in investment.
I'm now seeking AI research roles, open to any location.
x@cg-y.com
Apps
Model compression is a family of techniques for building portable deep neural networks with lower memory and computation costs. At Huawei, I worked on several such projects in 2019 and 2020, including applications shipped on smartphones (e.g. the Mate 30 and Honor V30). Currently, I am leading the AdderNet project, which aims to develop a series of deep learning models that use only additions (Discussions on Reddit).
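As a toy illustration of what compression looks like in practice, here is a minimal sketch of post-training dynamic quantization in PyTorch. This is a generic example, not the pipeline used in the products above; the model and shapes are made up.

```python
import torch
import torch.nn as nn

# A toy fully connected model standing in for a larger network.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Post-training dynamic quantization: weights are stored as int8 and
# activations are quantized on the fly, cutting memory and compute.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement at inference time.
x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```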
The Vanilla Neural Architecture for the 2020s
Project Page | Paper | Discussion on Zhihu
VanillaNet is remarkable! The concept was born from embracing the "less is more"
philosophy in computer vision. It is elegantly designed, avoiding excessive depth and intricate operations such as
self-attention, which makes it powerful yet concise. The 6-layer VanillaNet surpasses ResNet-34, and the
13-layer variant achieves about 83% Top-1 accuracy, outpacing networks with hundreds of layers
while offering exceptional hardware-efficiency advantages.
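To make "plain" concrete, here is a hypothetical PyTorch sketch of a stacked conv-norm-activation design without residual branches or attention. It only conveys the shape of the idea; the real VanillaNet uses its own stage widths and the training techniques described in the paper.

```python
import torch
import torch.nn as nn

def plain_stage(cin, cout):
    # One plain stage: a single conv, normalization, and activation,
    # followed by downsampling -- no residuals, no self-attention.
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

# A shallow plain network: patchify stem plus a handful of stages.
net = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=4),
    plain_stage(64, 128),
    plain_stage(128, 256),
    plain_stage(256, 512),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(512, 1000),
)

print(net(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```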
Adder Neural Networks
Project Page | Hardware Implementation
I would like to say, AdderNet is very cool! The initial idea came up around
2017 while climbing with some friends in Beijing. By replacing all convolutional layers (except the first and the
last) with adder layers, we can now obtain comparable performance on ResNet architectures. In addition, to make the story
more complete, we recently released the hardware implementation and some quantization methods. The results are quite
encouraging: we can reduce both the energy consumption and the circuit area significantly without
affecting performance. Now, we are working on more applications to reduce the cost of deploying AI
algorithms, such as low-level vision, detection, and NLP tasks.
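At its core, an adder layer replaces the cross-correlation between filters and input patches with a negative L1 distance, so the forward pass needs only additions and subtractions. Below is a minimal PyTorch sketch of that forward computation; the `Adder2d` class is illustrative, and the released implementation also adapts gradients and per-layer learning rates as described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adder2d(nn.Module):
    """Illustrative adder layer: output is the negative L1 distance
    between each filter and each input patch (additions only)."""

    def __init__(self, cin, cout, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(cout, cin * kernel_size * kernel_size))
        self.k, self.stride, self.padding = kernel_size, stride, padding

    def forward(self, x):
        n, _, h, w = x.shape
        # Extract sliding patches: (N, cin*k*k, L).
        patches = F.unfold(x, self.k, stride=self.stride,
                           padding=self.padding)
        # Broadcast to (N, cout, cin*k*k, L), then sum |x - w| over
        # the patch dimension and negate.
        diff = patches.unsqueeze(1) - self.weight[None, :, :, None]
        out = -diff.abs().sum(dim=2)
        h_out = (h + 2 * self.padding - self.k) // self.stride + 1
        w_out = (w + 2 * self.padding - self.k) // self.stride + 1
        return out.view(n, -1, h_out, w_out)

layer = Adder2d(16, 32, kernel_size=3, padding=1)
print(layer(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 32, 8, 8])
```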
GhostNet on MindSpore: SOTA Lightweight CV Networks
Huawei Connect (HC) 2020 | MindSpore Hub
The initial version of GhostNet was accepted at CVPR 2020 and achieved SOTA performance on ImageNet: 75.7% top-1 accuracy with only 226M FLOPs. In the current
version, we release a series of computer vision models (e.g. int8 quantization, detection, and larger networks) on
MindSpore 1.0 and the Mate 30 Pro (Kirin 990).
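For reference, the idea behind the Ghost module in the CVPR 2020 paper is to produce part of the feature maps with a small primary convolution and generate the remaining "ghost" maps with cheap depthwise operations. The sketch below is a simplified PyTorch rendering of that idea, not the released MindSpore code.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Simplified Ghost module: a primary conv yields intrinsic
    features; a cheap depthwise conv yields the 'ghost' features."""

    def __init__(self, cin, cout, ratio=2):
        super().__init__()
        init_ch = cout // ratio       # intrinsic channels
        ghost_ch = cout - init_ch     # cheap "ghost" channels
        self.primary = nn.Sequential(
            nn.Conv2d(cin, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, 3, padding=1,
                      groups=init_ch, bias=False),  # depthwise
            nn.BatchNorm2d(ghost_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

m = GhostModule(16, 32)
print(m(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 32, 32, 32])
```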
AI on Ascend: Real-Time Video Style Transfer
Huawei Developer Conference (HDC) 2020 | Online Demo
This project aims to develop a video style transfer system on the Huawei Atlas 200 DK AI Developer Kit. The original model takes about 630ms to process one image; after accelerating it with our method, the latency is about 40ms.
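For context on how per-image latency figures like these are usually obtained, here is a generic timing harness with warm-up iterations and averaging. It is not the Atlas 200 DK deployment code; the model and input shape are placeholders.

```python
import time
import torch

def measure_latency_ms(model, input_shape, warmup=10, runs=100):
    """Average per-image latency in milliseconds, after warm-up
    iterations to stabilize caches and clocks."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        elapsed = time.perf_counter() - start
    return elapsed / runs * 1000.0

# Example: time a small conv layer on one 256x256 image.
net = torch.nn.Conv2d(3, 16, 3, padding=1)
print(f"{measure_latency_ms(net, (1, 3, 256, 256)):.1f} ms")
```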