From Wikipedia, the free encyclopedia
The long short-term memory (LSTM) cell can process data sequentially and keep its hidden state through time.

Long short-term memory (LSTM)[1] is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem[2] commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence learning methods. It aims to provide a short-term memory for RNN that can last thousands of timesteps (thus "long short-term memory").[1] The name is made in analogy with long-term memory and short-term memory and their relationship, studied by cognitive psychologists since the early 20th century.

An LSTM unit is typically composed of a cell and three gates: an input gate, an output gate,[3] and a forget gate.[4] The cell remembers values over arbitrary time intervals, and the gates regulate the flow of information into and out of the cell. Forget gates decide what information to discard from the previous state, by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 signifies retention of the information, and a value of 0 represents discarding. Input gates decide which pieces of new information to store in the current cell state, using the same system as forget gates. Output gates control which pieces of information in the current cell state to output, by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful, long-term dependencies to make predictions, both in current and future time-steps.

LSTM has wide applications in classification,[5][6] data processing, time series analysis,[7] speech recognition,[8][9] machine translation,[10][11] speech activity detection,[12] robot control,[13][14] video games,[15][16] and healthcare.[17]

Motivation

In theory, classic RNNs can keep track of arbitrary long-term dependencies in the input sequences. The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow with little to no attenuation. However, LSTM networks can still suffer from the exploding gradient problem.[18]

The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information.[4] In other words, the network effectively learns which information might be needed later on in a sequence and when that information is no longer needed. For instance, in the context of natural language processing, the network can learn grammatical dependencies.[19] An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject Dave, noting that this information is pertinent for the pronoun his, and noting that this information is no longer important after the verb is.

Variants

In the equations below, the lowercase variables represent vectors. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, $c_t \in \mathbb{R}^{h}$ is not just one unit of one LSTM cell, but contains $h$ LSTM cell's units.

See [20] for an empirical study of 8 architectural variants of LSTM.

LSTM with a forget gate

The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:[1][4]

    f_t = \sigma_g(W_f x_t + U_f h_{t-1} + b_f)
    i_t = \sigma_g(W_i x_t + U_i h_{t-1} + b_i)
    o_t = \sigma_g(W_o x_t + U_o h_{t-1} + b_o)
    \tilde{c}_t = \sigma_c(W_c x_t + U_c h_{t-1} + b_c)
    c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
    h_t = o_t \odot \sigma_h(c_t)

where the initial values are $c_0 = 0$ and $h_0 = 0$, and the operator $\odot$ denotes the Hadamard product (element-wise product). The subscript $t$ indexes the time step.
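
The following is a minimal NumPy sketch of a single forward step implementing these equations, shown only to make the data flow concrete; the function name lstm_step, the dictionary layout of the parameters and the toy dimensions are illustrative choices of this rewrite, not part of the original formulation.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        """One forward step of an LSTM cell with a forget gate.

        x_t: input vector of shape (d,)
        h_prev, c_prev: previous hidden and cell state, each of shape (h,)
        W, U, b: dicts keyed by 'f', 'i', 'o', 'c' holding input weights (h, d),
                 recurrent weights (h, h) and biases (h,).
        """
        f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
        i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
        o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
        c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # cell input
        c_t = f_t * c_prev + i_t * c_tilde                          # new cell state
        h_t = o_t * np.tanh(c_t)                                    # new hidden state
        return h_t, c_t

    # Toy usage with random parameters (d = 3 input features, h = 2 hidden units).
    rng = np.random.default_rng(0)
    d, h = 3, 2
    W = {k: rng.standard_normal((h, d)) for k in 'fioc'}
    U = {k: rng.standard_normal((h, h)) for k in 'fioc'}
    b = {k: np.zeros(h) for k in 'fioc'}
    h_t, c_t = np.zeros(h), np.zeros(h)
    for x_t in rng.standard_normal((5, d)):   # run the cell over a sequence of 5 inputs
        h_t, c_t = lstm_step(x_t, h_t, c_t, W, U, b)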

Variables

Letting the superscripts $d$ and $h$ refer to the number of input features and number of hidden units, respectively:

  • $x_t \in \mathbb{R}^{d}$: input vector to the LSTM unit
  • $f_t \in (0,1)^{h}$: forget gate's activation vector
  • $i_t \in (0,1)^{h}$: input/update gate's activation vector
  • $o_t \in (0,1)^{h}$: output gate's activation vector
  • $h_t \in (-1,1)^{h}$: hidden state vector, also known as output vector of the LSTM unit
  • $\tilde{c}_t \in (-1,1)^{h}$: cell input activation vector
  • $c_t \in \mathbb{R}^{h}$: cell state vector
  • $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^{h}$: weight matrices and bias vector parameters which need to be learned during training
  • $\sigma_g$: sigmoid function.
  • $\sigma_c$: hyperbolic tangent function.
  • $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM paper[21][22] suggests, $\sigma_h(x) = x$.

Peephole LSTM

A peephole LSTM unit with input (i.e. $i$), output (i.e. $o$), and forget (i.e. $f$) gates

The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM).[21][22] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[21] $h_{t-1}$ is not used; $c_{t-1}$ is used instead in most places.
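
For reference, the forward pass of a peephole LSTM cell can be written in the notation of the previous section as follows (a commonly used form, given here for orientation; the cited papers differ slightly in which terms receive peephole connections):

    f_t = \sigma_g(W_f x_t + U_f c_{t-1} + b_f)
    i_t = \sigma_g(W_i x_t + U_i c_{t-1} + b_i)
    o_t = \sigma_g(W_o x_t + U_o c_{t-1} + b_o)
    c_t = f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c x_t + b_c)
    h_t = o_t \odot \sigma_h(c_t)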

Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. $i_t$, $o_t$ and $f_t$ represent the activations of, respectively, the input, output and forget gates, at time step $t$.

The 3 exit arrows from the memory cell $c$ to the 3 gates $i$, $o$ and $f$ represent the peephole connections. These peephole connections actually denote the contributions of the activation of the memory cell at time step $t-1$, i.e. the contribution of $c_{t-1}$ (and not $c_{t}$, as the picture may suggest). In other words, the gates $i$, $o$ and $f$ calculate their activations at time step $t$ (i.e., respectively, $i_t$, $o_t$ and $f_t$) also considering the activation of the memory cell at time step $t-1$, i.e. $c_{t-1}$.

The single left-to-right arrow exiting the memory cell is not a peephole connection and denotes $c_{t}$.

The little circles containing a $\times$ symbol represent an element-wise multiplication between their inputs. The big circles containing an S-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.

Peephole convolutional LSTM

Peephole convolutional LSTM.[23] The $*$ denotes the convolution operator.
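
In this variant, introduced for precipitation nowcasting, the states and gates are spatial feature maps and the matrix–vector products of the standard equations are replaced by convolutions. A commonly cited form of the forward pass, reconstructed here in the notation used above (see [23] for the original formulation), is:

    f_t = \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \odot c_{t-1} + b_f)
    i_t = \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \odot c_{t-1} + b_i)
    c_t = f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c)
    o_t = \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \odot c_t + b_o)
    h_t = o_t \odot \sigma_h(c_t)

where the $V$ weights implement the peephole connections.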

Training

An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.
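
A minimal PyTorch sketch of such a supervised training step is given below (the framework choice, layer sizes and toy data are assumptions of this rewrite, not a prescribed recipe). Because the loss is computed from the network unrolled over the whole sequence, calling backward() performs backpropagation through time, and gradient clipping is a common guard against the exploding gradients mentioned earlier.

    import torch
    import torch.nn as nn

    model = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
    head = nn.Linear(32, 1)                      # one regression output per time step
    opt = torch.optim.SGD(list(model.parameters()) + list(head.parameters()), lr=0.01)

    x = torch.randn(16, 100, 8)                  # 16 sequences, 100 steps, 8 features each
    y = torch.randn(16, 100, 1)                  # per-step regression targets

    opt.zero_grad()
    outputs, _ = model(x)                        # (16, 100, 32), unrolled over all 100 steps
    loss = nn.functional.mse_loss(head(outputs), y)
    loss.backward()                              # backpropagation through time
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()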

A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to $\lim_{n \to \infty} W^n = 0$ if the spectral radius of $W$ is smaller than 1.[2][24]

However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.
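
A short way to see this with the forget-gate equations above: since $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$, the Jacobian of the cell state with respect to its predecessor is $\partial c_t / \partial c_{t-1} = \operatorname{diag}(f_t)$ (ignoring the indirect dependence through the gates). As long as the forget gate stays close to 1, the back-propagated error through the cell is therefore scaled by a factor near 1 at every step, rather than being repeatedly multiplied by a recurrent weight matrix.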

CTC score function

Many applications use stacks of LSTM RNNs[25] and train them by connectionist temporal classification (CTC)[5] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
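
As an illustration of how such a setup is typically wired together, the following is a minimal PyTorch sketch (the framework, layer sizes and toy data are assumptions of this rewrite, not taken from the cited papers): a stacked bidirectional LSTM whose per-frame outputs are projected to label logits and trained with the CTC loss, which marginalizes over all alignments between the frame-level predictions and the shorter label sequence.

    import torch
    import torch.nn as nn

    num_classes = 10                   # label alphabet size; index 0 is the CTC "blank"
    model = nn.LSTM(input_size=40, hidden_size=128, num_layers=2, bidirectional=True)
    proj = nn.Linear(2 * 128, num_classes)        # map hidden states to label logits
    ctc = nn.CTCLoss(blank=0)
    opt = torch.optim.Adam(list(model.parameters()) + list(proj.parameters()), lr=1e-3)

    # Toy batch: T = 50 frames, N = 4 sequences, 40 features per frame.
    x = torch.randn(50, 4, 40)
    targets = torch.randint(1, num_classes, (4, 20))      # label indices, no blanks
    input_lengths = torch.full((4,), 50, dtype=torch.long)
    target_lengths = torch.full((4,), 20, dtype=torch.long)

    opt.zero_grad()
    outputs, _ = model(x)                                 # (T, N, 2 * hidden)
    log_probs = proj(outputs).log_softmax(dim=-1)         # CTC expects log-probabilities
    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    loss.backward()
    opt.step()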

Alternatives

Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution[7] or by policy gradient methods, especially when there is no "teacher" (that is, training labels).

Applications

Applications of LSTM include:

  • Speech recognition[8][9][27][28]
  • Robot control[13][14]
  • Time series prediction[7]
  • Rainfall–runoff modelling in hydrology[29]
  • Music composition[30]
  • Grammar learning[31][32]
  • Handwriting recognition[33][34]
  • Human action recognition[35]
  • Sign language translation[36]
  • Protein homology detection[37]
  • Predicting the subcellular localization of proteins[38]
  • Time series anomaly detection[39]
  • Business process monitoring[40]
  • Prediction in medical care pathways[41]
  • Semantic parsing[42]
  • Spatio-temporal action localization in videos[43][44]
  • Airport passenger behavior prediction[45]
  • Short-term traffic forecasting[46]
  • Drug design[47]
  • Market prediction[48]
  • Excavator activity recognition[49]

2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice.[50][51] According to the official blog post, the new model cut transcription errors by 49%.[52]

2016: Google started using an LSTM to suggest messages in the Allo conversation app.[53] In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%.[10][54][55]

Apple announced at its Worldwide Developers Conference that it would start using the LSTM for QuickType[56][57][58] in the iPhone and for Siri.[59][60]

Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.[61]

2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.[11]

Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".[62]

2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2,[15] and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.[14][63]

2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of Starcraft II.[16][63]

History

Development

Aspects of LSTM were anticipated by "focused back-propagation" (Mozer, 1989),[64] cited by the LSTM paper.[1]

Sepp Hochreiter's 1991 German diploma thesis analyzed the vanishing gradient problem and developed principles of the method.[2] His supervisor, Jürgen Schmidhuber, considered the thesis highly significant.[65]

An early version of LSTM was published in 1995 in a technical report by Sepp Hochreiter and Jürgen Schmidhuber,[66] then published in the NIPS 1996 conference.[3]

The most commonly cited LSTM paper was published in 1997 in the journal Neural Computation.[1] By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells and input and output gates.[20]

(Felix Gers, Jürgen Schmidhuber, and Fred Cummins, 1999)[67] introduced the forget gate (also called the "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state.[20] This is the most commonly used version of LSTM nowadays.

(Gers, Schmidhuber, and Cummins, 2000) added peephole connections.[21][22] Additionally, the output activation function was omitted.[20]

Development of variants

(Graves, Fernandez, Gomez, and Schmidhuber, 2006)[5] introduced a new error function for LSTM: Connectionist Temporal Classification (CTC) for simultaneous alignment and recognition of sequences.

(Graves, Schmidhuber, 2005)[26] published LSTM with full backpropagation through time and bidirectional LSTM.

(Kyunghyun Cho et al., 2014)[68] published a simplified variant of the forget gate LSTM[67] called Gated recurrent unit (GRU).

(Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber, 2015) used LSTM principles[67] to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks.[69][70][71] Concurrently, the ResNet architecture was developed. It is equivalent to an open-gated or gateless highway network.[72]

A modern upgrade of LSTM called xLSTM was published by a team led by Sepp Hochreiter (Beck et al., 2024).[73][74] One of its two building blocks (mLSTM) is parallelizable like the Transformer architecture, while the other (sLSTM) allows state tracking.

Applications

2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models.[21][63]

Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm).[75]

2004: First successful application of LSTM to speech recognition, by Alex Graves et al.[76][63]

2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher.[7]

Mayer et al. trained LSTM to control robots.[13]

2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher.[77]

Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.[37]

2009: Justin Bayer et al. introduced neural architecture search for LSTM.[78][63]

2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves.[79] One was the most accurate model in the competition and another was the fastest.[80] This was the first time an RNN won international competitions.[63]

2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.[28]

2017: Researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference.[81] Their time-aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM.

References

  1. ^ a b c d e Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. S2CID 1915014.
  2. ^ a b c Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF) (diploma thesis). Technical University Munich, Institute of Computer Science.
  3. ^ a b Hochreiter, Sepp; Schmidhuber, Jürgen (1996). "LSTM can solve hard long time lag problems". Proceedings of the 9th International Conference on Neural Information Processing Systems. NIPS'96. Cambridge, MA, USA: MIT Press: 473–479.
  4. ^ a b c Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation. 12 (10): 2451–2471. CiteSeerX 10.1.1.55.5709. doi:10.1162/089976600300015015. PMID 11032042. S2CID 11598600.
  5. ^ a b c Graves, Alex; Fernández, Santiago; Gomez, Faustino; Schmidhuber, Jürgen (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". In Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX 10.1.1.75.6306.
  6. ^ Karim, Fazle; Majumdar, Somshubra; Darabi, Houshang; Chen, Shun (2018). "LSTM Fully Convolutional Networks for Time Series Classification". IEEE Access. 6: 1662–1669. arXiv:1709.05206. Bibcode:2018IEEEA...6.1662K. doi:10.1109/ACCESS.2017.2779939. ISSN 2169-3536.
  7. ^ a b c d Wierstra, Daan; Schmidhuber, J.; Gomez, F. J. (2005). "Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning". Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh: 853–858.
  8. ^ Sak, Hasim; Senior, Andrew; Beaufays, Francoise (2014). "Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling" (PDF). Archived from the original (PDF) on 2025-08-07.
  9. ^ Li, Xiangang; Wu, Xihong (2014). "Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition". arXiv:1410.4281 [cs.CL].
  10. ^ a b Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad; Macherey, Wolfgang; Krikun, Maxim; Cao, Yuan; Gao, Qin (2016). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv:1609.08144 [cs.CL].
  11. ^ a b Ong, Thuy (4 August 2017). "Facebook's translations are now powered completely by AI". The Verge. Retrieved 2025-08-07.
  12. ^ Sahidullah, Md; Patino, Jose; Cornell, Samuele; Yin, Ruiking; Sivasankaran, Sunit; Bredin, Herve; Korshunov, Pavel; Brutti, Alessio; Serizel, Romain; Vincent, Emmanuel; Evans, Nicholas; Marcel, Sebastien; Squartini, Stefano; Barras, Claude (2019). "The Speed Submission to DIHARD II: Contributions & Lessons Learned". arXiv:1911.02388 [eess.AS].
  13. ^ a b c Mayer, H.; Gomez, F.; Wierstra, D.; Nagy, I.; Knoll, A.; Schmidhuber, J. (October 2006). "A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks". 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 543–548. CiteSeerX 10.1.1.218.3399. doi:10.1109/IROS.2006.282190. ISBN 978-1-4244-0258-8. S2CID 12284900.
  14. ^ a b "Learning Dexterity". OpenAI. July 30, 2018. Retrieved 2025-08-07.
  15. ^ a b Rodriguez, Jesus (July 2, 2018). "The Science Behind OpenAI Five that just Produced One of the Greatest Breakthrough in the History of AI". Towards Data Science. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  16. ^ a b Stanford, Stacy (January 25, 2019). "DeepMind's AI, AlphaStar Showcases Significant Progress Towards AGI". Medium ML Memoirs. Retrieved 2025-08-07.
  17. ^ Schmidhuber, Jürgen (2021). "The 2010s: Our Decade of Deep Learning / Outlook on the 2020s". AI Blog. IDSIA, Switzerland. Retrieved 2025-08-07.
  18. ^ Calin, Ovidiu (14 February 2020). Deep Learning Architectures. Cham, Switzerland: Springer Nature. p. 555. ISBN 978-3-030-36720-6.
  19. ^ Lakretz, Yair; Kruszewski, German; Desbordes, Theo; Hupkes, Dieuwke; Dehaene, Stanislas; Baroni, Marco (2019), "The emergence of number and syntax units in LSTM language models" (PDF), Association for Computational Linguistics, pp. 11–20, doi:10.18653/v1/N19-1002, hdl:11245.1/16cb6800-e10d-4166-8e0b-fed61ca6ebb4, S2CID 81978369
  20. ^ a b c d Klaus Greff; Rupesh Kumar Srivastava; Jan Koutník; Bas R. Steunebrink; Jürgen Schmidhuber (2015). "LSTM: A Search Space Odyssey". IEEE Transactions on Neural Networks and Learning Systems. 28 (10): 2222–2232. arXiv:1503.04069. Bibcode:2015arXiv150304069G. doi:10.1109/TNNLS.2016.2582924. PMID 27411231. S2CID 3356463.
  21. ^ a b c d e f Gers, F. A.; Schmidhuber, J. (2001). "LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages" (PDF). IEEE Transactions on Neural Networks. 12 (6): 1333–1340. doi:10.1109/72.963769. PMID 18249962. S2CID 10192330. Archived from the original (PDF) on 2025-08-07.
  22. ^ a b c d Gers, F.; Schraudolph, N.; Schmidhuber, J. (2002). "Learning precise timing with LSTM recurrent networks" (PDF). Journal of Machine Learning Research. 3: 115–143.
  23. ^ Xingjian Shi; Zhourong Chen; Hao Wang; Dit-Yan Yeung; Wai-kin Wong; Wang-chun Woo (2015). "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting". Proceedings of the 28th International Conference on Neural Information Processing Systems: 802–810. arXiv:1506.04214. Bibcode:2015arXiv150604214S.
  24. ^ Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. (2001). "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies". In Kremer, S. C.; Kolen, J. F. (eds.). A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
  25. ^ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). "Sequence labelling in structured domains with hierarchical recurrent neural networks". Proc. 20th Int. Joint Conf. On Artificial Intelligence, Ijcai 2007: 774–779. CiteSeerX 10.1.1.79.1887.
  26. ^ a b Graves, A.; Schmidhuber, J. (2005). "Framewise phoneme classification with bidirectional LSTM and other neural network architectures". Neural Networks. 18 (5–6): 602–610. CiteSeerX 10.1.1.331.5800. doi:10.1016/j.neunet.2005.06.042. PMID 16112549. S2CID 1856462.
  27. ^ Fernández, S.; Graves, A.; Schmidhuber, J. (9 September 2007). "An Application of Recurrent Neural Networks to Discriminative Keyword Spotting". Proceedings of the 17th International Conference on Artificial Neural Networks. ICANN'07. Berlin, Heidelberg: Springer-Verlag: 220–229. ISBN 978-3540746935. Retrieved 28 December 2023.
  28. ^ a b Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey (2013). "Speech recognition with deep recurrent neural networks". 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 6645–6649. arXiv:1303.5778. doi:10.1109/ICASSP.2013.6638947. ISBN 978-1-4799-0356-6. S2CID 206741496.
  29. ^ Kratzert, Frederik; Klotz, Daniel; Shalev, Guy; Klambauer, Günter; Hochreiter, Sepp; Nearing, Grey (2019). "Towards learning universal, regional, and local hydrological behaviors via machine learning applied to large-sample datasets". Hydrology and Earth System Sciences. 23 (12): 5089–5110. arXiv:1907.08456. Bibcode:2019HESS...23.5089K. doi:10.5194/hess-23-5089-2019. ISSN 1027-5606.
  30. ^ Eck, Douglas; Schmidhuber, Jürgen (2002). "Learning the Long-Term Structure of the Blues". Artificial Neural Networks — ICANN 2002. Lecture Notes in Computer Science. Vol. 2415. Springer, Berlin, Heidelberg. pp. 284–289. CiteSeerX 10.1.1.116.3620. doi:10.1007/3-540-46084-5_47. ISBN 978-3540460848.
  31. ^ Schmidhuber, J.; Gers, F.; Eck, D.; Schmidhuber, J.; Gers, F. (2002). "Learning nonregular languages: A comparison of simple recurrent networks and LSTM". Neural Computation. 14 (9): 2039–2041. CiteSeerX 10.1.1.11.7369. doi:10.1162/089976602320263980. PMID 12184841. S2CID 30459046.
  32. ^ Perez-Ortiz, J. A.; Gers, F. A.; Eck, D.; Schmidhuber, J. (2003). "Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets". Neural Networks. 16 (2): 241–250. CiteSeerX 10.1.1.381.1992. doi:10.1016/s0893-6080(02)00219-8. PMID 12628609.
  33. ^ A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009.
  34. ^ Graves, A.; Fernández, S.; Liwicki, M.; Bunke, H.; Schmidhuber, J. (3 December 2007). "Unconstrained Online Handwriting Recognition with Recurrent Neural Networks". Proceedings of the 20th International Conference on Neural Information Processing Systems. NIPS'07. USA: Curran Associates Inc.: 577–584. ISBN 9781605603520. Retrieved 28 December 2023.
  35. ^ Baccouche, M.; Mamalet, F.; Wolf, C.; Garcia, C.; Baskurt, A. (2011). "Sequential Deep Learning for Human Action Recognition". In Salah, A. A.; Lepri, B. (eds.). 2nd International Workshop on Human Behavior Understanding (HBU). Lecture Notes in Computer Science. Vol. 7065. Amsterdam, Netherlands: Springer. pp. 29–39. doi:10.1007/978-3-642-25446-8_4. ISBN 978-3-642-25445-1.
  36. ^ Huang, Jie; Zhou, Wengang; Zhang, Qilin; Li, Houqiang; Li, Weiping (2018). "Video-based Sign Language Recognition without Temporal Segmentation". arXiv:1801.10111 [cs.CV].
  37. ^ a b Hochreiter, S.; Heusel, M.; Obermayer, K. (2007). "Fast model-based protein homology detection without alignment". Bioinformatics. 23 (14): 1728–1736. doi:10.1093/bioinformatics/btm247. PMID 17488755.
  38. ^ Thireou, T.; Reczko, M. (2007). "Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 4 (3): 441–446. doi:10.1109/tcbb.2007.1015. PMID 17666763. S2CID 11787259.
  39. ^ Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautam; Agarwal, Puneet (April 2015). "Long Short Term Memory Networks for Anomaly Detection in Time Series" (PDF). European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015. Archived from the original (PDF) on 2025-08-07. Retrieved 2025-08-07.
  40. ^ Tax, N.; Verenich, I.; La Rosa, M.; Dumas, M. (2017). "Predictive Business Process Monitoring with LSTM Neural Networks". Advanced Information Systems Engineering. Lecture Notes in Computer Science. Vol. 10253. pp. 477–492. arXiv:1612.02130. doi:10.1007/978-3-319-59536-8_30. ISBN 978-3-319-59535-1. S2CID 2192354.
  41. ^ Choi, E.; Bahadori, M.T.; Schuetz, E.; Stewart, W.; Sun, J. (2016). "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks". JMLR Workshop and Conference Proceedings. 56: 301–318. arXiv:1511.05942. Bibcode:2015arXiv151105942C. PMC 5341604. PMID 28286600.
  42. ^ Jia, Robin; Liang, Percy (2016). "Data Recombination for Neural Semantic Parsing". arXiv:1606.03622 [cs.CL].
  43. ^ Wang, Le; Duan, Xuhuan; Zhang, Qilin; Niu, Zhenxing; Hua, Gang; Zheng, Nanning (2018). "Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation" (PDF). Sensors. 18 (5): 1657. Bibcode:2018Senso..18.1657W. doi:10.3390/s18051657. ISSN 1424-8220. PMC 5982167. PMID 29789447.
  44. ^ Duan, Xuhuan; Wang, Le; Zhai, Changbo; Zheng, Nanning; Zhang, Qilin; Niu, Zhenxing; Hua, Gang (2018). "Joint Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation". 2018 25th IEEE International Conference on Image Processing (ICIP). 25th IEEE International Conference on Image Processing (ICIP). pp. 918–922. doi:10.1109/icip.2018.8451692. ISBN 978-1-4799-7061-2.
  45. ^ Orsini, F.; Gastaldi, M.; Mantecchini, L.; Rossi, R. (2019). Neural networks trained with WiFi traces to predict airport passenger behavior. 6th International Conference on Models and Technologies for Intelligent Transportation Systems. Krakow: IEEE. arXiv:1910.14026. doi:10.1109/MTITS.2019.8883365. 8883365.
  46. ^ Zhao, Z.; Chen, W.; Wu, X.; Chen, P.C.Y.; Liu, J. (2017). "LSTM network: A deep learning approach for Short-term traffic forecast". IET Intelligent Transport Systems. 11 (2): 68–75. doi:10.1049/iet-its.2016.0208. S2CID 114567527.
  47. ^ Gupta A, Müller AT, Huisman BJH, Fuchs JA, Schneider P, Schneider G (2018). "Generative Recurrent Networks for De Novo Drug Design". Mol Inform. 37 (1–2). doi:10.1002/minf.201700111. PMC 5836943. PMID 29095571.
  48. ^ Saiful Islam, Md.; Hossain, Emam (2025-08-07). "Foreign Exchange Currency Rate Prediction using a GRU-LSTM Hybrid Network". Soft Computing Letters. 3: 100009. doi:10.1016/j.socl.2020.100009. ISSN 2666-2221.
  49. ^ Martin, Abbey; Hill, Andrew J.; Seiler, Konstantin M.; Balamurali, Mehala (2024). "Automatic excavator action recognition and localisation for untrimmed video using hybrid LSTM-Transformer networks". International Journal of Mining, Reclamation and Environment. 38 (5): 353–372. doi:10.1080/17480930.2023.2290364. ISSN 1748-0930.
  50. ^ Beaufays, Françoise (August 11, 2015). "The neural networks behind Google Voice transcription". Research Blog. Retrieved 2025-08-07.
  51. ^ Sak, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise; Schalkwyk, Johan (September 24, 2015). "Google voice search: faster and more accurate". Research Blog. Retrieved 2025-08-07.
  52. ^ "Neon prescription... or rather, New transcription for Google Voice". Official Google Blog. 23 July 2015. Retrieved 2025-08-07.
  53. ^ Khaitan, Pranav (May 18, 2016). "Chat Smarter with Allo". Research Blog. Retrieved 2025-08-07.
  54. ^ Metz, Cade (September 27, 2016). "An Infusion of AI Makes Google Translate More Powerful Than Ever | WIRED". Wired. Retrieved 2025-08-07.
  55. ^ "A Neural Network for Machine Translation, at Production Scale". Google AI Blog. 27 September 2016. Retrieved 2025-08-07.
  56. ^ Efrati, Amir (June 13, 2016). "Apple's Machines Can Learn Too". The Information. Retrieved 2025-08-07.
  57. ^ Ranger, Steve (June 14, 2016). "iPhone, AI and big data: Here's how Apple plans to protect your privacy". ZDNet. Retrieved 2025-08-07.
  58. ^ "Can Global Semantic Context Improve Neural Language Models? – Apple". Apple Machine Learning Journal. Retrieved 2025-08-07.
  59. ^ Smith, Chris (2016). "iOS 10: Siri now works in third-party apps, comes with extra AI features". BGR. Retrieved 2025-08-07.
  60. ^ Capes, Tim; Coles, Paul; Conkie, Alistair; Golipour, Ladan; Hadjitarkhani, Abie; Hu, Qiong; Huddleston, Nancy; Hunt, Melvyn; Li, Jiangchuan; Neeracher, Matthias; Prahallad, Kishore (2017). "Siri On-Device Deep Learning-Guided Unit Selection Text-to-Speech System". Interspeech 2017. ISCA: 4011–4015. doi:10.21437/Interspeech.2017-1798.
  61. ^ Vogels, Werner (30 November 2016). "Bringing the Magic of Amazon AI and Alexa to Apps on AWS. – All Things Distributed". www.allthingsdistributed.com. Retrieved 2025-08-07.
  62. ^ Xiong, W.; Wu, L.; Alleva, F.; Droppo, J.; Huang, X.; Stolcke, A. (April 2018). "The Microsoft 2017 Conversational Speech Recognition System". 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. pp. 5934–5938. arXiv:1708.06073. doi:10.1109/ICASSP.2018.8461870. ISBN 978-1-5386-4658-8.
  63. ^ a b c d e f Schmidhuber, Juergen (10 May 2021). "Deep Learning: Our Miraculous Year 1990-1991". arXiv:2005.05744 [cs.NE].
  64. ^ Mozer, Mike (1989). "A Focused Backpropagation Algorithm for Temporal Pattern Recognition". Complex Systems.
  65. ^ Schmidhuber, Juergen (2022). "Annotated History of Modern AI and Deep Learning". arXiv:2212.11279 [cs.NE].
  66. ^ Sepp Hochreiter; Jürgen Schmidhuber (21 August 1995), Long Short Term Memory, Wikidata Q98967430
  67. ^ a b c Gers, Felix; Schmidhuber, Jürgen; Cummins, Fred (1999). "Learning to forget: Continual prediction with LSTM". 9th International Conference on Artificial Neural Networks: ICANN '99. Vol. 1999. pp. 850–855. doi:10.1049/cp:19991218. ISBN 0-85296-721-7.
  68. ^ Cho, Kyunghyun; van Merrienboer, Bart; Gulcehre, Caglar; Bahdanau, Dzmitry; Bougares, Fethi; Schwenk, Holger; Bengio, Yoshua (2014). "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation". arXiv:1406.1078 [cs.CL].
  69. ^ Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (2 May 2015). "Highway Networks". arXiv:1505.00387 [cs.LG].
  70. ^ Srivastava, Rupesh K; Greff, Klaus; Schmidhuber, Juergen (2015). "Training Very Deep Networks". Advances in Neural Information Processing Systems. 28. Curran Associates, Inc.: 2377–2385.
  71. ^ Schmidhuber, Jürgen (2021). "The most cited neural networks all build on work done in my labs". AI Blog. IDSIA, Switzerland. Retrieved 2025-08-07.
  72. ^ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). "Deep Residual Learning for Image Recognition". 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 770–778. arXiv:1512.03385. doi:10.1109/CVPR.2016.90. ISBN 978-1-4673-8851-1.
  73. ^ Beck, Maximilian; Pöppel, Korbinian; Spanring, Markus; Auer, Andreas; Prudnikova, Oleksandra; Kopp, Michael; Klambauer, Günter; Brandstetter, Johannes; Hochreiter, Sepp (2024). "xLSTM: Extended Long Short-Term Memory". arXiv:2405.04517 [cs.LG].
  74. ^ NX-AI/xlstm, NXAI, 2025-08-07, retrieved 2025-08-07
  75. ^ Hochreiter, S.; Younger, A. S.; Conwell, P. R. (2001). "Learning to Learn Using Gradient Descent". Artificial Neural Networks — ICANN 2001 (PDF). Lecture Notes in Computer Science. Vol. 2130. pp. 87–94. CiteSeerX 10.1.1.5.323. doi:10.1007/3-540-44668-0_13. ISBN 978-3-540-42486-4. ISSN 0302-9743. S2CID 52872549.
  76. ^ Graves, Alex; Beringer, Nicole; Eck, Douglas; Schmidhuber, Juergen (2004). Biologically Plausible Speech Recognition with LSTM Neural Nets. Workshop on Biologically Inspired Approaches to Advanced Information Technology, Bio-ADIT 2004, Lausanne, Switzerland. pp. 175–184.
  77. ^ Wierstra, Daan; Foerster, Alexander; Peters, Jan; Schmidhuber, Juergen (2005). "Solving Deep Memory POMDPs with Recurrent Policy Gradients". International Conference on Artificial Neural Networks ICANN'07.
  78. ^ Bayer, Justin; Wierstra, Daan; Togelius, Julian; Schmidhuber, Juergen (2009). "Evolving memory cell structures for sequence learning". International Conference on Artificial Neural Networks ICANN'09, Cyprus.
  79. ^ Graves, A.; Liwicki, M.; Fernández, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (May 2009). "A Novel Connectionist System for Unconstrained Handwriting Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868. CiteSeerX 10.1.1.139.4502. doi:10.1109/tpami.2008.137. ISSN 0162-8828. PMID 19299860. S2CID 14635907.
  80. ^ Märgner, Volker; Abed, Haikal El (July 2009). "ICDAR 2009 Arabic Handwriting Recognition Competition". 2009 10th International Conference on Document Analysis and Recognition. pp. 1383–1387. doi:10.1109/ICDAR.2009.256. ISBN 978-1-4244-4500-4. S2CID 52851337.
  81. ^ Baytas, Inci M.; Xiao, Cao; Zhang, Xi; Wang, Fei; Jain, Anil K.; Zhou, Jiayu (2017). "Patient Subtyping via Time-Aware LSTM Networks". Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: Association for Computing Machinery. pp. 65–74. doi:10.1145/3097983.3097997. ISBN 978-1-4503-4887-4.
