[JdeRobot] Fwd: Robotics Academy Project for GSoC 2020

Eduardo Perdices edupergar at gmail.com
Tue Mar 10 09:05:07 CET 2020


---------- Forwarded message ---------
From: Naman Jain <naman1205jain at gmail.com>
Date: Tue, Mar 10, 2020, 08:57
Subject: Re: Robotics Academy Project for GSoC 2020
To: <edupergar at gmail.com>, <n.arranz.agueda at gmail.com>


Hi,
I contacted you two weeks ago with some queries about the computer vision GSoC
project. I understand you might have missed the mails due to your busy
schedule, so I thought it best to send a reminder.
Specifically, my queries were:

*Queries: *I had some queries about the end goals of the project.
> Particularly, I am interested in how the exercises are to be integrated
> with ROS and cameras. While computer vision exercises can be performed
> solely on offline videos, I understand that using webcam input would be
> exciting and add more interest to them. So I wanted to ask, what kind of
> integration do you plan to use between camera input and the exercises? For
> some of the exercises I have in mind, real-time processing might not be
> feasible.
>
> Secondly, in one of the computer vision exercises, FollowFace, you have
> an explicit requirement for a Sony EVI D100P camera. Would there be other
> hardware requirements for working on the project?
>
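
To make the first question concrete, here is a minimal sketch (purely
illustrative, assuming plain OpenCV and none of the ROS plumbing you may
already have in place) of how a single exercise loop could accept either a
webcam or an offline video file:

    # Illustrative sketch only: cv2.VideoCapture abstracts over a webcam
    # (device index) and an offline video (file path), so the same exercise
    # loop can serve both input modes.
    import sys
    import cv2

    source = 0 if len(sys.argv) < 2 else sys.argv[1]  # webcam by default
    cap = cv2.VideoCapture(source)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Placeholder for the student's algorithm (simple edge detection here).
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)
        cv2.imshow("exercise output", edges)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

The same loop works on a recorded video simply by passing a file path, which
is why I was curious how the live camera side would be wired into the
exercises.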

I had also mentioned some project ideas, which I would be glad if you could
comment on.

> Apart from some simple and fundamental exercises on image gradients,
> filtering and corner detection, we could add interesting exercises such as
> panorama stitching, image alignment, noise removal, image inpainting
> <https://en.wikipedia.org/wiki/Inpainting>, cartoonifying images (using
> edge detection and noise addition), and style transfer (based on image
> processing techniques). We could also add video-based tasks such as KLT
> tracking, video stabilization, optical flow, etc.
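
As an illustration of one of these ideas, here is a minimal sketch of the
cartoonifying exercise (again purely illustrative; it assumes only OpenCV
and the file names are placeholders), combining an edge mask with a smoothed
colour image:

    # Illustrative "cartoonify" sketch: overlay a binary edge mask on a
    # colour image whose flat regions have been smoothed.
    import cv2

    img = cv2.imread("input.jpg")  # placeholder file name

    # Edge mask from a median-blurred grayscale version of the image.
    gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 9, 2)

    # Flatten colour regions while preserving edges, then apply the mask.
    color = cv2.bilateralFilter(img, 9, 75, 75)
    cartoon = cv2.bitwise_and(color, color, mask=edges)

    cv2.imwrite("cartoon.jpg", cartoon)

Each of the other ideas (panorama stitching, KLT tracking, and so on) could
be reduced to a similarly small skeleton for students to complete.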


Thank you in advance,
Naman


On Mon, Feb 24, 2020 at 2:57 PM Naman Jain <naman1205jain at gmail.com> wrote:

> Hi,
>
> I am a final-year undergraduate studying computer science at IIT Bombay. I
> am enthusiastic about research, primarily in machine learning applied to
> computer vision, and I will be starting graduate studies in computer vision
> in Fall '20.
>
>
> *Motivation: *I was looking at the GSoC 2020 projects and came across the
> Robotics Academy project on adding new computer vision exercises, and found
> that it matches my interests. I believe that, with my strong background in
> computer vision, I would be a good fit for this position, and it would also
> be an enriching opportunity for me.
>
>
> *Previous Open Source Work: *My work on human pose estimation
> <https://github.com/Naman-ntc/Pytorch-Human-Pose-Estimation/> has been
> used by several people around the globe. I have also contributed to
> PyTorch <https://github.com/pytorch/pytorch/pull/6136>, a deep learning
> library, adding the `randint` function along with relevant
> documentation and tests. You can find my other work on my GitHub page
> <https://github.com/Naman-ntc/>.
>
>
> *Queries: *I had some queries about the end goals of the project.
> Particularly, I am interested in how the exercises are to be integrated
> with ROS and cameras. While computer vision exercises can be performed
> solely on offline videos, I understand that using webcam input would be
> exciting and add more interest to them. So I wanted to ask, what kind of
> integration do you plan to use between camera input and the exercises? For
> some of the exercises I have in mind, real-time processing might not be
> feasible.
>
> Secondly, in one of the computer vision exercises, FollowFace, you have
> an explicit requirement for a Sony EVI D100P camera. Would there be other
> hardware requirements for working on the project?
>
>
> *Ideas: *I was thinking of possible exercises to add, and I came up with
> many interesting ideas based on projects I have completed or seen. We could
> add exercises on panorama stitching, image alignment, noise removal, image
> inpainting <https://en.wikipedia.org/wiki/Inpainting>, cartoonifying images
> (using edge detection and noise addition), and style transfer (based on
> image processing techniques). We could also come up with video exercises
> (KLT tracking, video stabilization, etc.).
>
> I will think of more cases that could be used to teach computer vision.
> Please let me know what you think about these ideas and whether I should
> explore some particular direction you have in mind.
>
>
> Looking forward to a positive response.
>
>
> Thank you,
>
> Naman Jain
>
> Final Year UG
>
> CSE, IIT Bombay
>
>
> Web: http://naman-ntc.github.io/
>
> Mail: naman1205jain at gmail.com
>