0. Contents
1. References
2. Problem Identification
The non-breaking space is represented in Unicode as u'\xa0'; is it outside the range GBK can encode?
3. How to Handle
.extract_first().replace(u'\xa0', u' ').strip().encode('utf-8', 'replace')
1. References
Beautiful Soup and Unicode Problems
Detailed explanation
What does unicodedata.normalize('NFKD', string) actually do?
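A quick check answers this (a minimal sketch, not from the original note): NFKD applies Unicode compatibility decomposition, and U+00A0 NO-BREAK SPACE decomposes to a plain U+0020 SPACE, so normalization quietly turns the nbsp into an ordinary space:

# -*- coding: utf-8 -*-
import unicodedata

s = u'No (64\xa0KB)'
# NFKD = compatibility decomposition; U+00A0 decomposes to U+0020,
# so the non-breaking space becomes an ordinary space.
print(unicodedata.normalize('NFKD', s) == u'No (64 KB)')  # True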
Scrapy : Select tag with non-breaking space with xpath
>>> selector.xpath(u'''
... //p[normalize-space()]
... [not(contains(normalize-space(), "\u00a0"))]
... ''')
What does normalize-space() actually do?
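Short answer: normalize-space() trims leading and trailing whitespace and collapses internal runs of whitespace to a single space, but XPath treats only space, tab, CR and LF as whitespace, so U+00A0 survives normalization; that is exactly why the query above can still detect it with contains(). A small check with a Scrapy Selector (the sample markup is made up):

# -*- coding: utf-8 -*-
from scrapy.selector import Selector

sel = Selector(text=u'<p>  No\t(64\xa0KB)  </p>')
# The tab collapses to a space and the ends are trimmed,
# but \xa0 is not XML whitespace, so it is left alone.
print(sel.xpath(u'normalize-space(//p)').extract_first())
# -> u'No (64\xa0KB)'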
In [244]: sel.css('.content')
Out[244]: [<Selector xpath=u"descendant-or-self::*[@class and contains(concat(' ', normalize-space(@class), ' '), ' content ')]" data=u'<p class="content text-
s.replace(u'\xa0', u' ').encode('utf-8')
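The two techniques from these references end up equivalent on this data; a minimal comparison (using the string from the session in section 2):

# -*- coding: utf-8 -*-
import unicodedata

s = u'No (64\xa0KB)'
# str.replace targets U+00A0 explicitly; NFKD removes it via
# compatibility decomposition. Both yield plain-space output.
print(s.replace(u'\xa0', u' ').encode('utf-8'))          # 'No (64 KB)'
print(unicodedata.normalize('NFKD', s).encode('utf-8'))  # 'No (64 KB)'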
2. Problem Identification
https://en.wikipedia.org/wiki/Comparison_of_text_editors
Inspecting the element in the browser shows &nbsp;
In the page source it is written as the entity &nbsp;
<tr>
  <td style="background: #FFD; color: black; vertical-align: middle; text-align: center;" class="partial table-partial">memory</td>
  <td>= Limited by available memory &nbsp;&nbsp;</td>
  <td style="background:#F99;vertical-align:middle;text-align:center;" class="table-no">No (64&nbsp;KB)</td>
  <td>= Some limit less than available memory (give max size if known)</td>
</tr>
The actual bytes transmitted over the wire: the non-breaking space is u'\xa0' in Unicode, and when the page is encoded as UTF-8 it becomes the two bytes '\xc2\xa0'.
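This is easy to confirm from a Python shell (a small check, not in the original note):

import binascii

# UTF-8 encodes U+00A0 as the two bytes C2 A0.
print(binascii.hexlify(u'No (64\xa0KB)'.encode('utf-8')))
# -> 4e6f20283634c2a04b4229  (the c2a0 in the middle is the nbsp)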
In [211]: for tr in response.xpath('//table[8]/tr[2]'):
     ...:     print [u''.join(i.xpath('.//text()').extract()) for i in tr.xpath('./*')]
     ...:
[u'memory', u'= Limited by available memory \xa0\xa0', u'No (64\xa0KB)', u'= Some limit less than available memory (give max size if known)']

In [212]: u'No (64\xa0KB)'.encode('utf-8')
Out[212]: 'No (64\xc2\xa0KB)'

In [213]: u'No (64\xa0KB)'.encode('utf-8').decode('utf-8')
Out[213]: u'No (64\xa0KB)'
Opening the saved CSV directly in Excel produces mojibake: Excel opens CSV with the system ANSI code page by default (GBK on a Chinese Windows), so the UTF-8 bytes are misdecoded, and u'\xa0' is outside the range GBK can encode anyway. Notepad and Notepad++ detect UTF-8 automatically and display the file correctly.
Opening the CSV in Notepad and re-saving it as ANSI lets Excel open it normally; characters outside the GBK range are replaced with '?'.
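The GBK claim can be verified directly: u'\xa0' really has no GBK code point, and errors='replace' yields the same '?' that save-as-ANSI produces (a minimal check):

s = u'No (64\xa0KB)'
try:
    s.encode('gbk')
except UnicodeEncodeError as e:
    print(e)  # 'gbk' codec can't encode character u'\xa0' ...

# errors='replace' substitutes '?' for the unencodable nbsp,
# matching what Notepad's save-as-ANSI does.
print(s.encode('gbk', 'replace'))  # 'No (64?KB)'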
3. How to Handle
.extract_first().replace(u'\xa0', u' ').strip().encode('utf-8','replace')
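Put together in a spider, the fix looks roughly like the sketch below; the spider name and output field are hypothetical, while the URL and table XPath come from section 2:

# -*- coding: utf-8 -*-
import scrapy

class WikiTableSpider(scrapy.Spider):
    name = 'wiki_table'
    start_urls = ['https://en.wikipedia.org/wiki/Comparison_of_text_editors']

    def parse(self, response):
        for td in response.xpath('//table[8]/tr[2]/td'):
            text = u''.join(td.xpath('.//text()').extract())
            # Swap the nbsp for a plain space before strip(), then
            # encode to UTF-8; 'replace' is a safety net for anything
            # UTF-8 cannot encode.
            yield {'cell': text.replace(u'\xa0', u' ').strip().encode('utf-8', 'replace')}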