The first two terms represent the separable part; the rest forms the correcting space. Note that if α_i + β_i − C = 0 for all i, or if γ tends to 0, the solution reduces to the traditional SVM. This happens when the similarity measures in Z* are not informative.
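To make the reduction concrete, here is the SVM+ dual as I understand it from [1] (the notation is my own reconstruction, so treat it as a sketch rather than the paper's exact statement). The correcting space enters only through the last term, weighted by 1/γ:

```latex
\max_{\alpha,\beta}\; \sum_{i=1}^{n}\alpha_i
  - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
  - \frac{1}{2\gamma}\sum_{i,j} (\alpha_i+\beta_i-C)(\alpha_j+\beta_j-C)\, K^{*}(x^{*}_i, x^{*}_j)
```

```latex
\text{s.t.}\quad \sum_i \alpha_i y_i = 0,\qquad
\sum_i (\alpha_i+\beta_i-C) = 0,\qquad
\alpha_i \ge 0,\ \beta_i \ge 0.
```

As γ → 0, the 1/(2γ) penalty forces the quadratic form in (α_i + β_i − C) toward zero; for a strictly positive-definite K* this means α_i + β_i = C for every i, the correcting kernel K* drops out, and what remains is the classical SVM dual with the box constraint 0 ≤ α_i ≤ C.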
Knowledge Transfer
The basic idea is that the teacher masters the rules mapping X* to y, and this rule set is much smaller than the rules in the original space X. The student needs to learn how to transfer useful features from X* into X. Also, since this requires only a small fraction of the entire knowledge, it does not need a large sample set.
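The transfer idea above can be sketched on synthetic data: the student learns a map from its own space X into the privileged space X*, then applies the teacher's simple rule there. Everything here (the linear map, the sign rule, the data) is my illustrative assumption, not the construction from [2]:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Privileged space X*: low-dimensional, and the teacher's rule is trivial
# there (the sign of the first coordinate).
x_star = rng.normal(size=(n, 2))
y = np.where(x_star[:, 0] > 0, 1, -1)

# Student space X: a noisy, higher-dimensional view of the same examples.
A = rng.normal(size=(2, 10))
x = x_star @ A + 0.1 * rng.normal(size=(n, 10))

# Step 1 (transfer): learn a mapping X -> X* by least squares,
# i.e. approximate the teacher's features from the student's features.
W, *_ = np.linalg.lstsq(x, x_star, rcond=None)
x_transferred = x @ W

# Step 2: apply the teacher's simple rule in the transferred space.
y_pred = np.where(x_transferred[:, 0] > 0, 1, -1)
accuracy = np.mean(y_pred == y)
print(f"transfer accuracy: {accuracy:.2f}")
```

Because the mapping from X back to X* is (nearly) linear here, a small sample suffices to recover the teacher's rule; that is the point of transferring features rather than relearning the decision rule in X from scratch.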
Reference
[1]. Vapnik, V., Vashist, A. A new learning paradigm: Learning using privileged information. Neural Networks 22 (2009), 544-557. doi:10.1016/j.neunet.2009.06.042
[2]. Vapnik, V., Izmailov, R. Learning Using Privileged Information: Similarity Control and Knowledge Transfer. Journal of Machine Learning Research 16 (2015), 2023-2049.
For the past four months, I have been working on cardiovascular disease risk prediction. Through this work, I came up with an idea for using GANs to learn in a progressive way, and I decided to write a paper on the topic (sorry, I can't discuss the idea in detail). I then began doing background research and found three related topics. In this post, I will summarize them.
NLP algorithms are designed to learn from language, which is usually unstructured and of arbitrary length. Even worse, different language families follow different rules, and applying different sentence segmentation methods may cause ambiguity. So it is necessary to transform this information into an appropriate, computer-readable representation. To enable such transformation, multiple tokenization and embedding strategies have been invented. This post gives a brief summary of these terms. (I assume readers already know some basic concepts, such as tokenization and n-grams; I will mainly talk about word embedding methods in this blog.)
Recently, I have been working on NER projects. As a newcomer, I have spent a lot of time researching current NER methods and have written up a summary. In this post, I will list my notes on NER in deep learning; I hope this is helpful for readers who are interested in, and also new to, this area. Before reading, I assume you already know some basic concepts (e.g., sequential neural networks, POS and IOB tagging, word embeddings, conditional random fields).
This post explains some basic statistical concepts used in disease morbidity risk prediction. As a CS student, I had a hard time figuring out these statistical concepts; I hope my summary is helpful for you.