Eric's Playground



An introduction to LSTM

Posted on 2017-10-13

LSTM is a widely used algorithm, derived from the RNN. Wikipedia defines it as follows:

Long short-term memory (LSTM) units (or blocks) are a building unit for layers of a recurrent neural network (RNN). An RNN composed of LSTM units is often called an LSTM network. A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell is responsible for “remembering” values over arbitrary time intervals; hence the word “memory” in LSTM. Each of the three gates can be thought of as a “conventional” artificial neuron, as in a multi-layer (or feedforward) neural network: that is, they compute an activation (using an activation function) of a weighted sum. Intuitively, they can be thought of as regulators of the flow of values that goes through the connections of the LSTM; hence the denotation “gate”. There are connections between these gates and the cell.
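To make "an activation of a weighted sum" per gate concrete, the standard LSTM update at time step $t$ can be written as follows, with $\sigma$ the sigmoid and $\odot$ element-wise multiplication:

$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f) \quad \text{(forget gate)}$$
$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i) \quad \text{(input gate)}$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o) \quad \text{(output gate)}$$
$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c) \quad \text{(candidate cell value)}$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \quad \text{(cell state: the 'memory')}$$
$$h_t = o_t \odot \tanh(c_t) \quad \text{(gated output)}$$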

An introduction to RNNs

Posted on 2017-07-19

A brief introduction to RNNs

RNNs were invented in the 1980s. As usual, let's look at the Wikipedia definition first:

A recurrent neural network (RNN) is a class of artificial neural network where connections between nodes form a directed graph along a sequence. This allows it to exhibit dynamic temporal behavior for a time sequence. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition[1] or speech recognition.
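To make "internal state (memory)" concrete, here is a minimal NumPy sketch of one recurrent update (function and weight names are illustrative, not from the post):

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # The new hidden state mixes the current input with the previous
    # hidden state -- the previous state is the network's "memory".
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(4, 3))   # 3-dim input -> 4-dim hidden state
W_hh = rng.normal(size=(4, 4))   # hidden state fed back into itself
b_h = np.zeros(4)

h = np.zeros(4)
for x_t in rng.normal(size=(5, 3)):   # a sequence of 5 inputs
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)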

The structure of an RNN is as follows:

Read more »

Activation function

Posted on 2017-06-15

The activation function is a crucial ingredient in deep learning. Let's first look at the Wikipedia definition of an activation function:

In computational networks, the activation function of a node defines the output of that node given an input or set of inputs. A standard computer chip circuit can be seen as a digital network of activation functions that can be “ON” (1) or “OFF” (0), depending on input. This is similar to the behavior of the linear perceptron in neural networks. However, only nonlinear activation functions allow such networks to compute nontrivial problems using only a small number of nodes. In artificial neural networks this function is also called the transfer function.
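As a minimal sketch (not from the post), here are three common nonlinear activation functions; the nonlinearity is exactly what the definition above says lets small networks compute nontrivial functions:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes input to (0, 1): a soft ON/OFF

def tanh(z):
    return np.tanh(z)                 # squashes input to (-1, 1), zero-centered

def relu(z):
    return np.maximum(0.0, z)         # keeps positive values, zeroes out negatives

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))   # approximately [0.119 0.5   0.881]
print(tanh(z))      # approximately [-0.964  0.     0.964]
print(relu(z))      # [0. 0. 2.]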

Read more »

How to use Jupyter Notebook

Posted on 2017-04-24

1. Installing Jupyter Notebook

Currently, the latest versions of Anaconda ship with Jupyter Notebook, so there is no need to install it separately.

2. Changing Jupyter Notebook's working directory

2.1 Editing the configuration file directly

#in cmd
jupyter notebook --generate-config
#then, in the generated jupyter_notebook_config.py, set c.NotebookApp.notebook_dir
c.NotebookApp.notebook_dir = r'F:\kaggle'   # raw string keeps the backslash literal

Once this is set, running jupyter notebook from any cmd window will use the
directory above as the home directory.
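If you only want a different directory for a single session, the same setting can also be passed on the command line via the standard --notebook-dir flag, with no config edit needed:

#in cmd: one-off alternative to editing the config file
jupyter notebook --notebook-dir=F:\kaggle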

Read more »

Backpropagation

Posted on 2017-03-20

The backpropagation algorithm plays an important role in neural networks: it makes computing a network's gradients for gradient descent much more efficient. This post focuses on how backpropagation works.

In fact, the only piece of mathematics backpropagation uses internally is the chain rule.
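As a one-step example of that chain rule (a sketch, not from the post: take a single neuron $a = \sigma(z)$ with $z = wx + b$ and squared loss $L = (a - y)^2$), the gradient with respect to the weight factors into local derivatives:

$$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial a} \cdot \frac{\partial a}{\partial z} \cdot \frac{\partial z}{\partial w} = 2(a - y)\,\sigma'(z)\,x$$

Backpropagation applies exactly this factorization layer by layer, from the loss backwards to the weights, reusing each intermediate derivative instead of recomputing it.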


Read more »

Neural Networks

Posted on 2017-03-18

This post gives a brief introduction to neural networks. Consider a supervised learning problem in which we have a set of labeled training examples, written $(x_i,y_i)$.
A neural network is trained on these examples to produce a complex, nonlinear hypothesis $h_{W,b}(x)$ that fits our data, where
$W$ and $b$ are the parameters of the model.
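In the simplest case (a sketch using the post's own notation), a single neuron computes

$$h_{W,b}(x) = f(W^{\top} x + b),$$

where $f$ is an activation function such as the sigmoid $f(z) = \frac{1}{1+e^{-z}}$; a neural network composes many such units to obtain the complex, nonlinear hypothesis above.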

Read more »

The Java lock usage (2)

Posted on 2017-02-12

The previous post introduced the general categories of locks; this one focuses on locks in Java.

Synchronized

As is well known, Java provides the synchronized keyword for locking a block of code. Oracle gives the following explanation:

You might wonder what happens when a static synchronized method is invoked, since a static method is associated with a class, not an object. In this case, the thread acquires the intrinsic lock for the Class object associated with the class. Thus access to class’s static fields is controlled by a lock that’s distinct from the lock for any instance of the class.
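A minimal Java sketch of the distinction the quote draws (class and field names are made up for illustration): the static method locks on the Class object, the instance method on this.

public class Counter {
    private static int classCount = 0;
    private int instanceCount = 0;

    // Acquires the intrinsic lock of the Class object (Counter.class):
    // one lock shared by all instances, guarding the static field.
    public static synchronized void incrementClassCount() {
        classCount++;
    }

    // Acquires the intrinsic lock of this particular instance:
    // distinct from the Class-object lock above.
    public synchronized void incrementInstanceCount() {
        instanceCount++;
    }
}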

Read more »

The Java lock usage (1)

Posted on 2017-01-09

What is a lock? I couldn't find an official definition; a Baidu Baike search turned up nothing beyond the everyday meaning. Searching for "object lock" instead did
yield a description. Let's first see what an object lock is:

An object lock is the lock Java acquires on the object specified in a synchronized(Object) statement guarding a critical section; an object lock is an exclusive lock.

In other words, in Java we can acquire a lock by using synchronized.
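A minimal sketch of that synchronized(Object) form (class and field names are illustrative): while one thread holds the lock on the object, any other thread entering the same block must wait.

public class Inventory {
    // The object named in synchronized(Object); its intrinsic lock is taken.
    private final Object lock = new Object();
    private int stock = 0;

    public void add(int n) {
        synchronized (lock) {   // critical section: exclusive, one thread at a time
            stock += n;
        }
    }
}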

Read more »

How to set up a blog

Posted on 2016-11-18

This blog is built with hexo + github pages.

1. Prerequisites:
install the git, npm, and hexo environments

2. Create a github pages blog repository on github.

3. Generate a hexo project inside the github pages repository.

4. Then put your finished md files into the blog\source\_posts directory (see the command sketch below).
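A minimal command sketch of the flow above, following the cmd convention of the earlier Jupyter post ('blog' is the project folder name, and hexo deploy assumes a deploy section is configured in _config.yml):

#in cmd
npm install -g hexo-cli    # step 1: hexo environment (install git and npm first)
hexo init blog             # step 3: generate the hexo project
cd blog
npm install
hexo generate              # build the static site from source\_posts
hexo deploy                # publish to the github pages repository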

Read more »