2020-06-05

Random Words Clumped into Text

By xrspook @ 14:47:54 Filed under: 扮IT

Pick random words from a book and stitch them into sentences and paragraphs. The key is handling prefixes and suffixes properly: a prefix must be looked up as a bound unit, and each suffix must be mapped to the prefix it follows.

Exercise 8: Markov analysis: Write a program to read a text from a file and perform Markov analysis. The result should be a dictionary that maps from prefixes to a collection of possible suffixes. The collection might be a list, tuple, or dictionary; it is up to you to make an appropriate choice. You can test your program with prefix length two, but you should write the program in a way that makes it easy to try other lengths.

Add a function to the previous program to generate random text based on the Markov analysis. Here is an example from Emma with prefix length 2: He was very clever, be it sweetness or be angry, ashamed or only amused, at such a stroke. She had never thought of Hannah till you were never meant for me?” “I cannot make speeches, Emma:” he soon cut it all himself.

For this example, I left the punctuation attached to the words. The result is almost syntactically correct, but not quite. Semantically, it almost makes sense, but not quite. What happens if you increase the prefix length? Does the random text make more sense?

Once your program is working, you might want to try a mash-up: if you combine text from two or more books, the random text you generate will blend the vocabulary and phrases from the sources in interesting ways.

Credit: This case study is based on an example from Kernighan and Pike, The Practice of Programming, Addison-Wesley, 1999. You should attempt this exercise before you go on; then you can download my solution from http://thinkpython2.com/code/markov.py. You will also need http://thinkpython2.com/code/emma.txt.

import random
from collections import defaultdict
def set_book(fin1, num):
    d = defaultdict(list) # values default to lists
    l = []
    for line in fin1:
        line = line.replace('-', ' ')
        for word in line.rstrip().split(): # split on whitespace and newlines, collect the words into a list
            l.append(word)
    for i in range(len(l) - num): # build the dictionary by stepping through the list index by index
        header = tuple(l[i:i + num]) # the prefix tuple is the key
        if l[i + num] not in d[header]:
            d[header].append(l[i + num]) # the list of possible suffixes is the value
    return d
def next_word(start, book):
    return random.choice(book[start])
fin1 = open('emma.txt', encoding='utf-8')
prefix_num = 3 # number of words in a prefix
suffix_num = 100 # number of suffixes to generate
book = set_book(fin1, prefix_num)
start = random.choice(list(book.keys())) # start from a random prefix
final = start
for i in range(suffix_num): # take the last prefix_num words as the prefix to find the next suffix
    final += (next_word(final[-prefix_num:], book),)
for word in final:
    print(word, end=' ')
# reigns alone. A very proper compliment! and then follows the application, 
# which I think, my dear, you said you had a great deal happier if she had no 
# intellectual superiority to make atonement to herself, or frighten those 
# who might hate her into outward respect. She had never seen her look so well, 
# so lovely, so engaging. There was consciousness, animation, and warmth; 
# there was every appearance of its being all in proof of how much he was 
# in love with, how to be able to return! I shall try what I can do. 
# Harriet's features are very delicate, which makes a likeness
2020-06-05

Setting Off

By xrspook @ 8:29:07 Filed under: 烂日记

I can no longer remember when I last wrote Python. It feels very distant, at least a month ago or more. The exact date escapes me, but I still remember where I got stuck last time and where I should pick things up again. At the time I had reached Chapter 14, but in truth I hadn't fully digested Chapter 13: I spent a fair amount of time on its earlier parts, while the later ones I simply gulped down whole. The final exercise of Chapter 13 I will never even attempt, because I have no idea what the problem wants from me, probably because my math is too weak to understand what it is asking. But the second-to-last exercise seemed within my reach.

That exercise has you randomly pick words from a book and assemble them into sentences that might be interesting. Sentences pieced together at random are of course semantic nonsense, but if each word's neighbors stay relatively stable, the combinations will at least mean something, even if the sentence as a whole is still absurd. As the cohesion between preceding and following words strengthens, the meaning of the whole sentence becomes clearer. It is essentially a scheme that finds a suffix from a prefix. By default the prefix is two words: the two preceding words determine the next one, then the last two words find the one after that, and so on. In principle the method extends to using the previous N words to find the next word, then dropping the first word and continuing.

The idea isn't complicated, but what do you implement it with? That genuinely takes some thought, and Think Python doesn't spell out every method. Before I finally wrote my own answer to this problem I looked at theirs, and I felt I didn't understand it, because it brought in plenty of things the book had never mentioned, silently assuming they are things you must already know and therefore need no explanation. If this were a traditional textbook, that would make life impossible! While doing this book's exercises I have grumbled countless times about how shamelessly they go beyond the syllabus. But it is precisely those out-of-nowhere leaps that force you not just to read the book but to think, and to go search out solutions yourself, the things they assume you understand but never actually said. In the end I wrote what I wanted; how far my result is from theirs I haven't compared. Many people say Python is Lego-style programming, one module stacked on the next. But the recursion in it makes my head spin, so where the reference answer uses global functions and recursion, I still chose loops, still printed everything from the main body, and nested several of the things I wanted into a single statement. I could of course pull the nested parts out into their own function, but I don't want several lines for what one line can say, even if the extra lines would make it handier to call later. I get away with this for now only because what I need is still simple: one statement with a few nested parameters, which is exactly how Excel formulas work too, though at times I also curse those Excel formulas that stretch tens of thousands of kilometers.

I had never imagined I could solve, within half a day, a problem I had thought about before without finding a way through.

2020-05-02

Isn't Changing the Dictionary's Rules Better?

By xrspook @ 20:55:44 Filed under: 扮IT

Changing the dictionary's key-value rule makes picking random words from a book trivial; I honestly don't understand why the reference answer goes to so much trouble. In Chapter 13 of Think Python 2, the default rule is that the word is the key and its frequency is the value. Since this exercise needs a unique index to reach a random word, why not make the value a unique serial number instead? Then one zip swaps the dictionary's keys and values, and random.choice() lands directly on a random word. I only changed the rule for building the dictionary, and it took 0.12 seconds; the reference answer, with all its extra machinery, took 0.42 seconds. The reference answer doesn't modify the dictionary's rules because it wants to instill Python's compose-from-modules style. Composition is convenient, but as this shows, it is not necessarily the most efficient.

This algorithm works, but it is not very efficient; each time you choose a random word, it rebuilds the list, which is as big as the original book. An obvious improvement is to build the list once and then make multiple selections, but the list is still big.

An alternative is:
1. Use keys to get a list of the words in the book.
2. Build a list that contains the cumulative sum of the word frequencies (see Exercise 2). The last item in this list is the total number of words in the book, n.
3. Choose a random number from 1 to n.
4. Use a bisection search (see Exercise 10) to find the index where the random number would be inserted in the cumulative sum.
5. Use the index to find the corresponding word in the word list.

Exercise 7: Write a program that uses this algorithm to choose a random word from the book. Solution: http://thinkpython2.com/code/analyze_book3.py.
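
For reference, a minimal sketch of that cumulative-sum-plus-bisection algorithm might look like this. This is not the code used in this post; the function name choose_from_hist and the details are my own assumptions, but it follows the five steps above.

import bisect
import random

def choose_from_hist(hist):
    # hist maps each word to its frequency, as built in Exercise 2
    words = list(hist)
    cumulative = []
    total = 0
    for w in words:
        total += hist[w]
        cumulative.append(total)  # running sums; the last item is n, the total word count
    x = random.randint(1, total)  # step 3: a random number from 1 to n
    i = bisect.bisect_left(cumulative, x)  # step 4: bisection search for the insertion index
    return words[i]  # step 5: the corresponding word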

import string
import random
from time import time
def set_book(fin1):
    useless = string.punctuation + string.whitespace + '“' + '”' # chop away all the punctuation and whitespace
    d = {}
    i = 0 # start numbering at 0 so the keys line up with the indices random.choice will try
    for line in fin1:
        line = line.replace('-', ' ') # split every hyphenated word in two; is that really a good idea?
        for word in line.split():
            word = word.strip(useless)
            word = word.lower()
            if word not in d:
                d[word] = i # the value assigned at insertion is a serial number
                i += 1
            # d[word] = d.get(word, 0) + 1 # no word frequencies needed here, so this goes
    return d
fin1 = open('emma.txt', encoding='utf-8')
start = time()
book1 = set_book(fin1)
book2 = dict(zip(book1.values(), book1.keys())) # swap keys and values; the serial number becomes the unique index
print('100 random words in book')
for i in range(100):
    if i > 1 and i%8 == 0:
        print()
    print(random.choice(book2), end=' ') # look the word up by its index, as fast as it gets
print()
end = time()
print(end - start)
# 100 random words in book
# solicit laughing preserve inebriety elton's unimpeded effusions unselfish
# intimate connect native judges charities travel informs colours
# enigmas bragge case greensward cox's particularly unexampled promise
# prone greensward dignity maps fourth christmas creature maximum
# graver mildest pleasant corrected increased named partridge marks
# following kept gloom conjecturing parlour inheriting say consulting
# magnified abundant produces sons malt add unenforceability beautifully
# richly striking confuse greatness asleep steps humility upon
# already paper delight liberties confide appendages undecided male
# prophecies esteem unadorned likelihood shopping deeply unbiased horrors
# man's dumplings business chapter shakespeare sees counsels attentive
# silenced ventured singular double mean waltzes requisite checks
# unattended qualified blessed surmises
# 0.12100672721862793
2020-04-30

Dictionaries and Recursion

By xrspook @ 8:48:32 Filed under: 烂日记

I still remember first meeting dictionaries in Microsoft's introductory Python videos. They seemed terribly abstruse, because I couldn't see how they differed from lists or what was so great about them. Probably that's because in the videos the dictionary entries were typed in by hand, and pairing things up one by one obviously looked tedious to me; nor did I notice any special efficiency when using them. An introduction is only an introduction: if you can't see what makes dictionaries powerful, of course you won't be interested in learning them. It was only after doing the Think Python 2 exercises, and suffering through them, that I understood. I had strained every nerve to find a word with a bisection search over a list; then I learned dictionaries and found the same word effortlessly, a whole league faster, and only then did I grasp how powerful dictionaries are. Clearly I am still not very familiar with them. I have only used the most basic features; there is a great deal I don't know yet, real uses I haven't touched. For now my dictionaries stop at the first level: the keys and values are ordinary strings. What if a value were replaced by a pile of lists? What if those lists contained piles of tuples? Just thinking about it feels terrifying; at that level, what would I even use to access those things? For now that is an unsolved mystery, because I haven't yet met an exercise like that.
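
(For what it's worth, reaching into such a structure is just chained subscripts and tuple unpacking; here is a hypothetical example with made-up data, not something from the book's exercises.)

# a hypothetical dictionary whose values are lists of tuples
index = {'a': [('apple', 3), ('ant', 1)], 'b': [('book', 2)]}

print(index['a'][0][1])  # chained subscripts: list item 0, tuple item 1 -> 3

for letter, pairs in index.items():
    for word, count in pairs:  # unpack each (word, count) tuple
        print(letter, word, count)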

Because of work I have put Python down for a day or two. This is the kind of thing where a single day without practice makes you rusty; next time I start again I will probably have forgotten how to write certain statements and will have to spend time reviewing what I already learned.

I remember that even before touching Python I had heard of this marvelous thing called a dictionary. C seems to have something like it, and so does Excel VBA, but I never used either, because they felt impossibly remote. Learning Python, I find dictionaries as basic and central as eating and sleeping. Strings, lists, dictionaries and tuples are like addition, subtraction, multiplication and division in math. The analogy I'm drawing isn't a one-to-one mapping, of course; the point is that they are all fundamental, and all of them have to be used fluently.

I also remember that it was, I think, the chapter on recursion whose summary said we should learn not to obsess over a single point but to think globally. With recursion you don't actually need to drop a value in and mentally trace the program through application after application; you should think at the top level: once this operation completes, what answer should I get? With that answer in hand, the program can carry on. That's easy to say, but in practice I often can't see what a recursion will finally return, so every time I can only trace it through by hand, stupidly, over and over. Sometimes, to save time, I start with a value that isn't the base case and thoroughly sabotage myself. Only later did I realize you should feed in the simplest thing first; that is the fastest way to the expected answer. Even now I find this way of thinking very hard to get used to. Before I ever studied recursion I had already met the Fibonacci sequence. Back then I implemented it with a loop, though the more direct and simple method is recursion. I remember high-school math also often asked us to express certain things in certain forms, and the problems they set were really recursions. So I probably encountered recursion long ago, just without programming lessons or a computer, which was thoroughly miserable.
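
(As a minimal sketch of the contrast I mean, my own illustration rather than the book's: the recursive version trusts the smaller calls to return the right answer, while the loop version is the one I reached for first.)

def fib_recursive(n):
    # direct translation of the recurrence: trust that the smaller calls are correct
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_loop(n):
    # the iterative version: carry the last two values forward
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_recursive(10), fib_loop(10))  # both print 55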

If I could have learned programming alongside math back in school, those years probably wouldn't have been so painful.

2020-04-26

Counting the Words in a Book

By xrspook @ 18:12:57 Filed under: 扮IT

Counting how many words a book contains should be a perfectly routine task, but in practice all sorts of situations crop up. Some you anticipate, such as typesetting that uses full-width punctuation (the program strips punctuation by default), but what if the typesetter didn't use spaces properly? Some you would never anticipate, such as bizarre strings born of typos. I noticed long ago that Notepad++ and Word disagree on word counts, with Notepad++ usually reporting the larger number. Who is right? Let it be; knowing the rough figure is enough. After all, on the gaokao nobody really docks your marks for an essay a few words short of 800.

The love-hate entanglement of dictionaries and lists is something I appreciate more deeply all the time.

words.txt is here, and emma.txt is here.

Exercise 1: Write a program that reads a file, breaks each line into words, strips whitespace and punctuation from the words, and converts them to lowercase. Hint: The string module provides a string named whitespace, which contains space, tab, newline, etc., and punctuation which contains the punctuation characters. Let’s see if we can make Python swear:
>>> import string
>>> string.punctuation
'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
Also, you might consider using the string methods strip, replace and translate.

Exercise 2: Go to Project Gutenberg (http://gutenberg.org) and download your favorite out-of-copyright book in plain text format. Modify your program from the previous exercise to read the book you downloaded, skip over the header information at the beginning of the file, and process the rest of the words as before. Then modify the program to count the total number of words in the book, and the number of times each word is used. Print the number of different words used in the book. Compare different books by different authors, written in different eras. Which author uses the most extensive vocabulary?
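
Neither of the solutions below actually skips the Project Gutenberg header; a minimal sketch of that step might look like this (the marker text varies between files, so the exact string is an assumption):

def skip_gutenberg_header(fp):
    # consume lines until the marker that ends the Project Gutenberg header;
    # the exact marker text is an assumption and differs between files
    for line in fp:
        if line.startswith('*** START OF'):
            return
    raise ValueError('header end marker not found')

# usage: call it right after opening the file, before reading any words
# fin = open('emma.txt', encoding='utf-8'); skip_gutenberg_header(fin)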

Exercise 3: Modify the program from the previous exercise to print the 20 most frequently used words in the book.

Exercise 4: Modify the previous program to read a word list (see Section 9.1) and then print all the words in the book that are not in the word list. How many of them are typos? How many of them are common words that should be in the word list, and how many of them are really obscure?

import string
fin = open('words.txt')
mydict = {}
for line in fin:
    word = line.strip()
    mydict[word] = ''
file = open('emma.txt', encoding='utf-8')
essay = file.read().lower()
essay = essay.replace('-', ' ')
pun = {}
str_all = '“' + '”' + string.punctuation
for x in str_all: # build a dict of every punctuation character
    pun[x] = ''
useless = essay.maketrans(pun) # two-argument maketrans needs equal-length strings; a dict solves that neatly
l = essay.translate(useless).split() # hyphenated words get butchered, but each half still counts as a word
print('this book has', len(l), 'words')
book = {}
for item in l: # file -> string -> word list -> counting dict: word as key, count as value
    book[item] = book.get(item, 0) + 1
list_words1 = sorted(list(zip(book.values(), book.keys())), reverse=True) # dict to list, keys and values swapped
print('this book has', len(list_words1), 'different words')
print('times', 'word', sep='\t')
count = 1
word_len = 0 # minimum word length to show
for times, word in list_words1: # print the 20 most-used words above that length (with no limit, the simplest words of 3 letters or fewer flood the screen)
    if len(word) > word_len:
        print(times, word, sep='\t')
        count += 1
    if count > 20:
        break
count = 0
for word in book:
    if word not in mydict:
        # print(word, end=' ')
        count += 1
print(count, 'words in book not in dict') # a grisly sight: 590 in total
# this book has 164065 words
# this book has 7479 different words
# times   word
# 5379    the
# 5322    to
# 4965    and
# 4412    of
# 3191    i
# 3187    a
# 2544    it
# 2483    her
# 2401    was
# 2365    she
# 2246    in
# 2172    not
# 2069    you
# 1995    be
# 1815    that
# 1813    he
# 1626    had
# 1448    as
# 1446    but
# 1373    for
# 590 words in book not in dict
# -----------------------------Solution 2----------------------------- the only real difference is how the words are split
import string
def set_book(fin1):
    useless = string.punctuation + string.whitespace + '“' + '”'
    d = {}
    for line in fin1:
        line = line.replace('-', ' ')
        for word in line.split():
            word = word.strip(useless) # strip punctuation from each word's ends instead of translating the whole text
            word = word.lower()
            d[word] = d.get(word, 0) + 1 # word as key, count as value
    return d
def set_dict(fin2):
    d = {}
    for line in fin2:
        word = line.strip()
        d[word] = d.get(word, 0) + 1 # build the word-list lookup the same way
    return d
fin1 = open('emma.txt', encoding='utf-8')
fin2 = open('words.txt')
book = set_book(fin1)
mydict = set_dict(fin2)
l = sorted(list(zip(book.values(), book.keys())), reverse=True)
count = 0
for key in book:
    count = count + book[key]
print('this book has', count, 'words')
print('this book has', len(book), 'different words')
num = 20
print(num, 'most common words in this book')
print('times', 'word', sep='\t')
for times, word in l:
    print(times, word, sep='\t')
    num -= 1
    if num < 1:
        break
count = 0
for word in book:
    if word not in mydict:
        # print(word, end=' ')
        count += 1
# print()
print(count, 'words in book not in dict')
# this book has 164120 words
# this book has 7531 different words
# 20 most common words in this book
# times   word
# 5379    the
# 5322    to
# 4965    and
# 4412    of
# 3191    i
# 3187    a
# 2544    it
# 2483    her
# 2401    was
# 2364    she
# 2246    in
# 2172    not
# 2069    you
# 1995    be
# 1815    that
# 1813    he
# 1626    had
# 1448    as
# 1446    but
# 1373    for
# 683 words in book not in dict