A Practical Roundup of Problems and Fixes After Installing Hue (Blogger Recommended)


 

 

 

Without further ado, straight to the useful stuff!

  My cluster consists of bigdatamaster (192.168.80.10), bigdataslave1 (192.168.80.11), and bigdataslave2 (192.168.80.12).

  The install directory is /home/hadoop/app.

 

  The official recommendation is to install Hue on the master machine, and I follow that here: it is installed on bigdatamaster.

 

 Hue version: hue-3.9.0-cdh5.5.4
 It must be compiled before it can be used (internet access required).


 A word to readers: if your machine has decent specs, definitely install Cloudera Manager; the components are built to work together as a family.
That said, speaking from experience, some component versions will act up, and ruling the problems out can take the better part of a day, so be mentally prepared. Enough preamble: since I am currently a graduate student and my laptop tops out at 8 GB of RAM, I can only practice with a manual install.
This is written purely for students around me with modest hardware and limited means! I have, however, already set up both Cloudera Manager and Ambari on our lab machines.

Ambari and Cloudera Manager, the two most mainstream cluster-management tools in big data

Installing and deploying a big-data cluster with Cloudera (illustrated in five major steps) (strongly recommended)

Installing and deploying a big-data cluster with Ambari (illustrated in five major steps) (strongly recommended)

 

 

 

 

 

 Problem 1:

1. Hive queries in Hue fail, and the page reports: Could not connect to localhost:10000, or Could not connect to bigdatamaster:10000

Solution:

  Start hiveserver2 in your Hive installation; port 10000 is the hiveserver2 service port, and without it the Hue web UI cannot run Hive queries.

  bigdatamaster is my machine's hostname.

 

  Under $HIVE_HOME:

bin/hive --service hiveserver2 &

[hadoop@bigdatamaster ~]$ cd $HIVE_HOME
[hadoop@bigdatamaster hive]$ bin/hive --service hiveserver2 &
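
  To confirm hiveserver2 actually came up, a quick check is to look for a listener on port 10000 or open a beeline session. A sketch; the connection string assumes the default port and my hostname above:

[hadoop@bigdatamaster hive]$ netstat -ntlp | grep 10000
[hadoop@bigdatamaster hive]$ bin/beeline -u jdbc:hive2://bigdatamaster:10000 -n hadoop -e "show databases;"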

 

  Note: the original post showed a screenshot of my hive-site.xml here; a minimal sketch of the relevant settings follows.
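
  A minimal sketch, assuming hiveserver2 binds to bigdatamaster on the default Thrift port (these are standard Hive properties; the values are assumptions matching my hosts):

<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>bigdatamaster</value>
</property>
<property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
</property>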

  With that, the problem was successfully solved.

 

 

 

 

 

 

Problem 2:

  database is locked

 

 

 This is an error in Hue's default SQLite database; you can switch to MySQL, PostgreSQL, or another database instead.

https://www.cloudera.com/documentation/enterprise/5-5-x/topics/admin_hue_ext_db.html (the official docs)

 

 

  Also see the reference at https://my.oschina.net/aibati2008/blog/647493,

  a post titled: a roundup of problems hit while installing, configuring, and using Hue.

  # Configuration options for specifying the Desktop Database. For more info,
  # see http://docs.djangoproject.com/en/1.4/ref/settings/#database-engine
  # ------------------------------------------------------------------------
  [[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name.
    # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "port=0" and then "name=<host>:<port>/<service_name>".
    # Note for MariaDB use the 'mysql' engine.
    ## engine=sqlite3
    ## host=
    ## port=
    ## user=
    ## password=
    ## name=desktop/desktop.db
    ## options={}

   The above is the default.

  Hue uses SQLite as its metadata store by default; this is not recommended in production, where it frequently hits the "database is locked" problem.

 

Solution:

  The official site has a fix as well, but the procedure seems a bit off, and it does not apply to versions after 3.7 (the post I referenced was written against 3.11; I am on 3.9.0 here). Below is the quickest switch-over method, as summarized:

[root@bigdatamaster hadoop]# mysql -uhive -phive
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 49
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| oozie              |
| test               |
+--------------------+
5 rows in set (0.07 sec)

mysql> 

  

 

  Because in my setup the user is hive, the password is hive, and the database is also hive, it looks like this:

  # Configuration options for specifying the Desktop Database. For more info,
  # see http://docs.djangoproject.com/en/1.4/ref/settings/#database-engine
  # ------------------------------------------------------------------------
  [[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name.
    # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "port=0" and then "name=<host>:<port>/<service_name>".
    # Note for MariaDB use the 'mysql' engine.
    engine=mysql
    host=bigdatamaster
    port=3306
    user=hive
    password=hive
    name=hive
    ## options={}

 

   Then restart the Hue process:

[hadoop@bigdatamaster hue]$ build/env/bin/supervisor

 

   After completing this configuration, starting Hue and visiting it in a browser produces an error, because the MySQL database has not been initialized:
DatabaseError: (1146, "Table 'hue.desktop_settings' doesn't exist")

  or

ProgrammingError: (1146, "Table 'hive.django_session' doesn't exist")

 

   

   Initialize the database:

cd /home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
bin/hue syncdb
bin/hue migrate

  Once these finish, you can see in MySQL that Hue's tables have been created.

  Start Hue, and it is accessible normally.

 

   

 

  Alternatively,

   you can create the database in MySQL up front. Name it hue, with user hue and password hue.

  First:

[root@master app]# mysql -uroot -prootroot
mysql> create user 'hue' identified by 'hue';    -- create an account: username hue, password hue


or
mysql> create user 'hue'@'%' identified by 'hue';    -- create an account: username hue, password hue
  Then:
mysql> GRANT ALL PRIVILEGES ON *.* to 'hue'@'%' IDENTIFIED BY 'hue' WITH GRANT OPTION;   -- grant to user hue from any host (%)
mysql> GRANT ALL PRIVILEGES ON *.* to 'hue'@'bigdatamaster' IDENTIFIED BY 'hue' WITH GRANT OPTION;  -- grant to user hue from host bigdatamaster
mysql> GRANT ALL PRIVILEGES ON *.* to 'hue'@'localhost' IDENTIFIED BY 'hue' WITH GRANT OPTION; -- grant to user hue from localhost (optional)

 

 

mysql> GRANT ALL PRIVILEGES ON *.* to 'hue'@'bigdatamaster' IDENTIFIED BY 'hue' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host,password from mysql.user;
+-------+---------------+-------------------------------------------+
| user  | host          | password                                  |
+-------+---------------+-------------------------------------------+
| root  | localhost     |                                           |
| root  | bigdatamaster |                                           |
| root  | 127.0.0.1     |                                           |
|       | localhost     |                                           |
|       | bigdatamaster |                                           |
| hive  | %             | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive  | bigdatamaster | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive  | localhost     | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| oozie | %             | *2B03FE0359FAD3B80620490CE614F8622E0828CD |
| oozie | bigdatamaster | *2B03FE0359FAD3B80620490CE614F8622E0828CD |
| oozie | localhost     | *2B03FE0359FAD3B80620490CE614F8622E0828CD |
| hue   | %             | *15221DE9A04689C4D312DEAC3B87DDF542AF439E |
| hue   | localhost     | *15221DE9A04689C4D312DEAC3B87DDF542AF439E |
| hue   | bigdatamaster | *15221DE9A04689C4D312DEAC3B87DDF542AF439E |
+-------+---------------+-------------------------------------------+
15 rows in set (0.00 sec)

mysql> exit;
Bye
[root@bigdatamaster hadoop]# 

 

 

 

  # Configuration options for specifying the Desktop Database. For more info,
  # see http://docs.djangoproject.com/en/1.4/ref/settings/#database-engine
  # ------------------------------------------------------------------------
  [[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name.
    # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "port=0" and then "name=<host>:<port>/<service_name>".
    # Note for MariaDB use the 'mysql' engine.
    engine=mysql
    host=bigdatamaster
    port=3306
    user=hue
    password=hue
    name=hue
    ## options={}

  After completing this configuration, starting Hue and visiting it in a browser can produce an error such as the following.

  If you hit this, don't forget that the database named hue still has to be created:

OperationalError: (1049, "Unknown database 'hue'")

 

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| oozie              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql> CREATE DATABASE hue;
Query OK, 1 row affected (0.49 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
[root@bigdatamaster hadoop]# 

 

 

 

  After starting Hue, an error like the following means the MySQL database has not been initialized:

ProgrammingError: (1146, "Table 'hue.django_session' doesn't exist")

 

 

  Then initialize the database:

cd /home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
bin/hue syncdb
bin/hue migrate

 

  The details follow. (Pay attention here: read it all the way through before typing anything!)

  Note: when syncdb prompts you to create a superuser, if you accept the default name hadoop, then hadoop is what you will log in to Hue with. (The original illustrated this with screenshots.)

   If you do get it wrong here, it can still be fixed afterwards, as follows.

  Step one: (the original showed a sequence of Useradmin screenshots for renaming the mistaken user inside Hue.)

  Doing it that way turns out to be awkward, though.

   So, to avoid the situation entirely, I simply enter hue for the username and hue for the password here:

[hadoop@bigdatamaster env]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
[hadoop@bigdatamaster env]$ ll
total 12
drwxrwxr-x 2 hadoop hadoop 4096 May  5 20:59 bin
drwxrwxr-x 2 hadoop hadoop 4096 May  5 20:46 include
drwxrwxr-x 3 hadoop hadoop 4096 May  5 20:46 lib
lrwxrwxrwx 1 hadoop hadoop    3 May  5 20:46 lib64 -> lib
-rw-rw-r-- 1 hadoop hadoop    0 May  5 20:46 stamp
[hadoop@bigdatamaster env]$ bin/hue syncdb
Syncing...
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_openid_auth_nonce
Creating table django_openid_auth_association
Creating table django_openid_auth_useropenid
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Creating table south_migrationhistory
Creating table axes_accessattempt
Creating table axes_accesslog

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'hadoop'): hue
Email address: 
Password: hue
Password (again): 
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)

Synced:
 > django.contrib.auth
 > django_openid_auth
 > django.contrib.contenttypes
 > django.contrib.sessions
 > django.contrib.sites
 > django.contrib.staticfiles
 > django.contrib.admin
 > south
 > axes
 > about
 > filebrowser
 > help
 > impala
 > jobbrowser
 > metastore
 > proxy
 > rdbms
 > zookeeper
 > indexer

Not synced (use migrations):
 - django_extensions
 - desktop
 - beeswax
 - hbase
 - jobsub
 - oozie
 - pig
 - search
 - security
 - spark
 - sqoop
 - useradmin
(use ./manage.py migrate to migrate these)
[hadoop@bigdatamaster env]$ 

 

 

 

  Then run:

[hadoop@bigdatamaster env]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
[hadoop@bigdatamaster env]$ bin/hue migrate
Running migrations for django_extensions:
 - Migrating forwards to 0001_empty.
 > django_extensions:0001_empty
 - Loading initial data for django_extensions.
Installed 0 object(s) from 0 fixture(s)
Running migrations for desktop:
 - Migrating forwards to 0016_auto__add_unique_document2_uuid_version_is_history.
 > pig:0001_initial
 > oozie:0001_initial
 > oozie:0002_auto__add_hive
 > oozie:0003_auto__add_sqoop
 > oozie:0004_auto__add_ssh
 > oozie:0005_auto__add_shell
 > oozie:0006_auto__chg_field_java_files__chg_field_java_archives__chg_field_sqoop_f
 > oozie:0007_auto__chg_field_sqoop_script_path
 > oozie:0008_auto__add_distcp
 > oozie:0009_auto__add_decision
 > oozie:0010_auto__add_fs
 > oozie:0011_auto__add_email
 > oozie:0012_auto__add_subworkflow__chg_field_email_subject__chg_field_email_body
 > oozie:0013_auto__add_generic
 > oozie:0014_auto__add_decisionend
 > oozie:0015_auto__add_field_dataset_advanced_start_instance__add_field_dataset_ins
 > oozie:0016_auto__add_field_coordinator_job_properties
 > oozie:0017_auto__add_bundledcoordinator__add_bundle
 > oozie:0018_auto__add_field_workflow_managed
 > oozie:0019_auto__add_field_java_capture_output
 > oozie:0020_chg_large_varchars_to_textfields
 > oozie:0021_auto__chg_field_java_args__add_field_job_is_trashed
 > oozie:0022_auto__chg_field_mapreduce_node_ptr__chg_field_start_node_ptr
 > oozie:0022_change_examples_path_format
 - Migration 'oozie:0022_change_examples_path_format' is marked for no-dry-run.
 > oozie:0023_auto__add_field_node_data__add_field_job_data
 > oozie:0024_auto__chg_field_subworkflow_sub_workflow
 > oozie:0025_change_examples_path_format
 - Migration 'oozie:0025_change_examples_path_format' is marked for no-dry-run.
 > desktop:0001_initial
 > desktop:0002_add_groups_and_homedirs
 > desktop:0003_group_permissions
 > desktop:0004_grouprelations
 > desktop:0005_settings
 > desktop:0006_settings_add_tour
 > beeswax:0001_initial
 > beeswax:0002_auto__add_field_queryhistory_notify
 > beeswax:0003_auto__add_field_queryhistory_server_name__add_field_queryhistory_serve
 > beeswax:0004_auto__add_session__add_field_queryhistory_server_type__add_field_query
 > beeswax:0005_auto__add_field_queryhistory_statement_number
 > beeswax:0006_auto__add_field_session_application
 > beeswax:0007_auto__add_field_savedquery_is_trashed
 > beeswax:0008_auto__add_field_queryhistory_query_type
 > desktop:0007_auto__add_documentpermission__add_documenttag__add_document
/home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/backends/mysql/base.py:124: Warning: Some non-transactional changed tables couldn't be rolled back
  return self.cursor.execute(query, args)
 > desktop:0008_documentpermission_m2m_tables
 > desktop:0009_auto__chg_field_document_name
 > desktop:0010_auto__add_document2__chg_field_userpreferences_key__chg_field_userpref
 > desktop:0011_auto__chg_field_document2_uuid
 > desktop:0012_auto__chg_field_documentpermission_perms
 > desktop:0013_auto__add_unique_documenttag_owner_tag
 > desktop:0014_auto__add_unique_document_content_type_object_id
 > desktop:0015_auto__add_unique_documentpermission_doc_perms
 > desktop:0016_auto__add_unique_document2_uuid_version_is_history
 - Loading initial data for desktop.
Installed 0 object(s) from 0 fixture(s)
Running migrations for beeswax:
 - Migrating forwards to 0013_auto__add_field_session_properties.
 > beeswax:0009_auto__add_field_savedquery_is_redacted__add_field_queryhistory_is_reda
 > beeswax:0009_auto__chg_field_queryhistory_server_port
 > beeswax:0010_merge_database_state
 > beeswax:0011_auto__chg_field_savedquery_name
 > beeswax:0012_auto__add_field_queryhistory_extra
 > beeswax:0013_auto__add_field_session_properties
 - Loading initial data for beeswax.
Installed 0 object(s) from 0 fixture(s)
Running migrations for hbase:
 - Migrating forwards to 0001_initial.
 > hbase:0001_initial
 - Loading initial data for hbase.
Installed 0 object(s) from 0 fixture(s)
Running migrations for jobsub:
 - Migrating forwards to 0006_chg_varchars_to_textfields.
 > jobsub:0001_initial
 > jobsub:0002_auto__add_ooziestreamingaction__add_oozieaction__add_oozieworkflow__ad
 > jobsub:0003_convertCharFieldtoTextField
 > jobsub:0004_hue1_to_hue2
 - Migration 'jobsub:0004_hue1_to_hue2' is marked for no-dry-run.
 > jobsub:0005_unify_with_oozie
 - Migration 'jobsub:0005_unify_with_oozie' is marked for no-dry-run.
 > jobsub:0006_chg_varchars_to_textfields
 - Loading initial data for jobsub.
Installed 0 object(s) from 0 fixture(s)
Running migrations for oozie:
 - Migrating forwards to 0027_auto__chg_field_node_name__chg_field_job_name.
 > oozie:0026_set_default_data_values
 - Migration 'oozie:0026_set_default_data_values' is marked for no-dry-run.
 > oozie:0027_auto__chg_field_node_name__chg_field_job_name
 - Loading initial data for oozie.
Installed 0 object(s) from 0 fixture(s)
Running migrations for pig:
- Nothing to migrate.
 - Loading initial data for pig.
Installed 0 object(s) from 0 fixture(s)
Running migrations for search:
 - Migrating forwards to 0003_auto__add_field_collection_owner.
 > search:0001_initial
 > search:0002_auto__del_core__add_collection
 > search:0003_auto__add_field_collection_owner
 - Loading initial data for search.
Installed 0 object(s) from 0 fixture(s)
? You have no migrations for the 'security' app. You might want some.
Running migrations for spark:
 - Migrating forwards to 0001_initial.
 > spark:0001_initial
 - Loading initial data for spark.
Installed 0 object(s) from 0 fixture(s)
Running migrations for sqoop:
 - Migrating forwards to 0001_initial.
 > sqoop:0001_initial
 - Loading initial data for sqoop.
Installed 0 object(s) from 0 fixture(s)
Running migrations for useradmin:
 - Migrating forwards to 0006_auto__add_index_userprofile_last_activity.
 > useradmin:0001_permissions_and_profiles
 - Migration 'useradmin:0001_permissions_and_profiles' is marked for no-dry-run.
 > useradmin:0002_add_ldap_support
 - Migration 'useradmin:0002_add_ldap_support' is marked for no-dry-run.
 > useradmin:0003_remove_metastore_readonly_huepermission
 - Migration 'useradmin:0003_remove_metastore_readonly_huepermission' is marked for no-dry-run.
 > useradmin:0004_add_field_UserProfile_first_login
 > useradmin:0005_auto__add_field_userprofile_last_activity
 > useradmin:0006_auto__add_index_userprofile_last_activity
 - Loading initial data for useradmin.
Installed 0 object(s) from 0 fixture(s)
[hadoop@bigdatamaster env]$ 

 

 

 

 Once these finish, you can see in MySQL that Hue's tables have been created:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| hue                |
| mysql              |
| oozie              |
| test               |
+--------------------+
6 rows in set (0.06 sec)

mysql> use hue;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+--------------------------------+
| Tables_in_hue                  |
+--------------------------------+
| auth_group                     |
| auth_group_permissions         |
| auth_permission                |
| auth_user                      |
| auth_user_groups               |
| auth_user_user_permissions     |
| axes_accessattempt             |
| axes_accesslog                 |
| beeswax_metainstall            |
| beeswax_queryhistory           |
| beeswax_savedquery             |
| beeswax_session                |
| desktop_document               |
| desktop_document2              |
| desktop_document2_dependencies |
| desktop_document2_tags         |
| desktop_document_tags          |
| desktop_documentpermission     |
| desktop_documenttag            |
| desktop_settings               |
| desktop_userpreferences        |
| django_admin_log               |
| django_content_type            |
| django_openid_auth_association |
| django_openid_auth_nonce       |
| django_openid_auth_useropenid  |
| django_session                 |
| django_site                    |
| documentpermission_groups      |
| documentpermission_users       |
| jobsub_checkforsetup           |
| jobsub_jobdesign               |
| jobsub_jobhistory              |
| jobsub_oozieaction             |
| jobsub_ooziedesign             |
| jobsub_ooziejavaaction         |
| jobsub_ooziemapreduceaction    |
| jobsub_ooziestreamingaction    |
| oozie_bundle                   |
| oozie_bundledcoordinator       |
| oozie_coordinator              |
| oozie_datainput                |
| oozie_dataoutput               |
| oozie_dataset                  |
| oozie_decision                 |
| oozie_decisionend              |
| oozie_distcp                   |
| oozie_email                    |
| oozie_end                      |
| oozie_fork                     |
| oozie_fs                       |
| oozie_generic                  |
| oozie_history                  |
| oozie_hive                     |
| oozie_java                     |
| oozie_job                      |
| oozie_join                     |
| oozie_kill                     |
| oozie_link                     |
| oozie_mapreduce                |
| oozie_node                     |
| oozie_pig                      |
| oozie_shell                    |
| oozie_sqoop                    |
| oozie_ssh                      |
| oozie_start                    |
| oozie_streaming                |
| oozie_subworkflow              |
| oozie_workflow                 |
| pig_document                   |
| pig_pigscript                  |
| search_collection              |
| search_facet                   |
| search_result                  |
| search_sorting                 |
| south_migrationhistory         |
| useradmin_grouppermission      |
| useradmin_huepermission        |
| useradmin_ldapgroup            |
| useradmin_userprofile          |
+--------------------------------+
80 rows in set (0.00 sec)

mysql> 

 

 

 

  Start Hue, and it is accessible normally:

 

[hadoop@bigdatamaster hue-3.9.0-cdh5.5.4]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.5.4
[hadoop@bigdatamaster hue-3.9.0-cdh5.5.4]$ build/env/bin/supervisor


Problem 3:

  Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory

 

Solution

  Find the MySQL client shared library on a machine (or VM) where MySQL is installed and copy it into the system library directory /usr/lib64. (The original says to copy libmysqlclient.so.18; copy whichever version the error message names, here libmysqlclient_r.so.16.)
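
  A sketch of that copy, assuming the library lives under /usr/lib64/mysql on a host that has MySQL installed (adjust the filename to whichever version the error names):

scp bigdatamaster:/usr/lib64/mysql/libmysqlclient_r.so.16 /usr/lib64/    # copy from a host that has MySQL
ldconfig                                                                 # refresh the shared-library cache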

  Note: my Hive is installed on bigdatamaster. The original labels the following as hive-site.xml, but the block shown is actually the [[[mysql]]] section of hue.ini's [librdbms] settings.

  My configuration here is:

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

    # mysql, oracle, or postgresql configuration.
    [[[mysql]]]
      # Name to show in the UI.
      nice_name="My SQL DB"

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      name=hive

      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      engine=mysql

      # IP or hostname of the database to connect to.
      host=bigdatamaster

      # Port the database server is listening to. Defaults are:
      # 1. MySQL: 3306
      # 2. PostgreSQL: 5432
      # 3. Oracle Express Edition: 1521
      port=3306

      # Username to authenticate with when connecting to the database.
      user=hive

      # Password matching the username to authenticate with when
      # connecting to the database.
      password=hive

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

 

  This problem was successfully solved!

 

 

 

Problem 4 (same as Problem 15 of this post)

  Clicking "File Browser" reports:

  Cannot access: /user/admin. Note: you are a Hue admin but not the HDFS superuser ("hdfs").

Solution:

  Edit core-site.xml under $HADOOP_HOME/etc/hadoop and add:

         <property>
                 <name>hadoop.proxyuser.oozie.hosts</name>
                 <value>*</value>
         </property>
         <property>
                 <name>hadoop.proxyuser.oozie.groups</name>
                 <value>*</value>
         </property>

         <property>
                 <name>hadoop.proxyuser.hue.hosts</name>
                 <value>*</value>
         </property>
         <property>
                 <name>hadoop.proxyuser.hue.groups</name>
                 <value>*</value>
         </property>

  Then restart Hadoop: stop-all.sh followed by start-all.sh.

 

  This problem was successfully solved!

 

      My usual configuration is as follows, in core-site.xml under $HADOOP_HOME/etc/hadoop:

    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfs.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfs.groups</name>
        <value>*</value>
     </property> 

  The reason for adding all of these is that I have three users (hadoop, hue, and hdfs).

  Add entries according to your own setup. After the change, be sure to restart with sbin/stop-all.sh and sbin/start-all.sh, and the problem is solved. A quick way to verify the proxyuser settings took effect is sketched below.
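
  This asks WebHDFS to impersonate a user the same way Hue does (a sketch; 50070 is the default NameNode HTTP port in Hadoop 2.x, and the user names match mine above):

curl "http://bigdatamaster:50070/webhdfs/v1/user?op=LISTSTATUS&user.name=hue&doas=hadoop"
# A 403 AuthorizationException ("User: hue is not allowed to impersonate hadoop")
# instead of a JSON listing means the proxyuser settings did not take.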

 

 

 

  

 

 

 

Problem 5

  When integrating HBase with Hue, the HBase Browser shows: Api Error: TSocket read 0 bytes

  Solution

   https://stackoverflow.com/questions/20415493/api-error-tsocket-read-0-bytes-when-using-hue-with-hbase

Add this to your hbase-site.xml:

<property>
  <name>hbase.thrift.support.proxyuser</name>
  <value>true</value>
</property>

<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>

  That resolves the problem; restart the HBase Thrift server for it to take effect (a sketch follows).
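
  A sketch of the restart, assuming the Thrift server is managed with the hbase-daemon.sh helper under $HBASE_HOME/bin:

bin/hbase-daemon.sh stop thrift
bin/hbase-daemon.sh start thrift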

 

 

 

 

 

 

 

 

  Problem 6:

   User: hadoop is not allowed to impersonate hue

Api error: <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/> <title>Error 500 User: hadoop is not allowed to impersonate hue</title> </head> <body><h2>HTTP ERROR 500</h2> <p>Problem accessing /. Reason: <pre> User: hadoop is not allowed to impersonate hue</pre></p><h3>Caused by:</h3><pre>javax.servlet.ServletException: User: hadoop is not allowed to impersonate hue at org.apache.hadoop.hbase.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:117) at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelE

 

 

   Solution

   In core-site.xml under $HADOOP_HOME/etc/hadoop, change:

<property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
</property>

 to:

<property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>hadoop</value>
</property>
<property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>hadoop</value>
</property>

 

 

 

 

  Problem 7

   Api error: the cluster configuration (Cluster|bigdatamaster:9090) is formatted incorrectly.

  (The original showed before-and-after screenshots here. The message points at the hbase_clusters entry in the [hbase] section of hue.ini; fix its format as sketched below.)
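
  A minimal sketch of the expected format, assuming the HBase Thrift server runs on bigdatamaster:9090 (the comment is the stock one from hue.ini):

[hbase]
  # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
  hbase_clusters=(Cluster|bigdatamaster:9090)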

Problem 8:

  The HDFS file browser in Hue reports that the current user has no permission to view, with:

  cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby

Solution:

  Check the state of the two NameNodes in the web UI; the one you configured has probably gone standby. That was exactly my situation: the service originally ran on master1, and after it died there was an automatic failover to master2, but webhdfs in Hue was still configured for master1, so Hue lost access.
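
  You can also confirm which NameNode is active from the command line, then point hue.ini's webhdfs_url at it. A sketch; nn1 and nn2 are placeholder service IDs, use the ones from your dfs.ha.namenodes setting:

hdfs haadmin -getServiceState nn1    # prints "active" or "standby"
hdfs haadmin -getServiceState nn2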

 

 

 

  

 Problem 9

  Hive queries report:

  org.apache.hive.service.cli.HiveSQLException: Couldn't find log associated with operation handle: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=b3d05ca6-e3e8-4bef-b869-0ea0732c3ac5]

Solution:

Set hive.server2.logging.operation.enabled to true in hive-site.xml:

<property>
    <name>hive.server2.logging.operation.enabled</name>
    <value>true</value>
</property>

 

 

 

 

 Problem 10

  Starting the Hue web UI reports: OperationalError: attempt to write a readonly database

Solution

  The user who starts the Hue server has no permission to write the default SQLite DB. Make sure every file under the install directory is owned by the hadoop user:

chown -R hadoop:hadoop hue-3.9.0-cdh5.5.4

 

 

 

  Problem 11

Hue reports: Filesystem root '/' should be owned by 'hdfs'

  That is, Hue expects the filesystem root "/" to belong to user "hdfs".

Solution

  In the file desktop/libs/hadoop/src/hadoop/fs/webhdfs.py, change DEFAULT_HDFS_SUPERUSER = 'hdfs' to your own hadoop user, as sketched below.
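
  A one-line sketch of that edit, run from the Hue install root (assumes your HDFS superuser is hadoop, and that the source line matches the quoted form exactly):

sed -i "s/DEFAULT_HDFS_SUPERUSER = 'hdfs'/DEFAULT_HDFS_SUPERUSER = 'hadoop'/" desktop/libs/hadoop/src/hadoop/fs/webhdfs.py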

 

 

 

 

 

  Problem 12

  Error: the kerberos principal name is missing from the hbase-site.xml configuration file.

   Solution

  (The original showed only screenshots here; no text is recoverable.)

  Problem 13

  Hue cannot connect to Zookeeper (timed out)

  Solution

  This means your zookeeper section in hue.ini is not fully configured yet; see the sketch below, and this post:

Hue's hue.ini zookeeper section explained in detail (illustrated) (covers HA clusters)
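
  A minimal sketch of the block, assuming a three-node ensemble on my hosts with ZooKeeper's default client port 2181 (host_ports is the standard key in Hue's [zookeeper] section):

[zookeeper]
  [[clusters]]
    [[[default]]]
      # Zookeeper ensemble. Comma separated list of Host/Port, e.g. localhost:2181,localhost:2182
      host_ports=bigdatamaster:2181,bigdataslave1:2181,bigdataslave2:2181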

 

 

 

  Problem 14

Sqoop error: unable to fetch connectors.

   Solution

  Check whether your Sqoop is actually a Sqoop2 release; mine, for example, was not (the original showed a screenshot of the version here).

  Note that Hue only supports Sqoop2; for the differences between Sqoop1 and Sqoop2, just check the official site.

   So the Sqoop version has to be switched:

Deploying sqoop2-1.99.5-cdh5.5.4.tar.gz

 

 

 

 

 

  Problem 15 (see Problem 4 of this post)

Cannot access: /user/hadoop. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".

  SecurityException: Failed to obtain user group information: org.apache.hadoop.security.authorize.AuthorizationException: User: hue is not allowed to impersonate hadoop (error 403)

 Analysis

  After Hue is installed, the first user to log in becomes Hue's superuser and can manage users and so on. In use, though, a problem surfaces: that user cannot manage data created in HDFS by the supergroup.

  Users created in Hue can manage the data under their own folders, /user/XXX. So how is the Hadoop superuser's data managed? Hue offers a feature that integrates Unix users into Hue; log in to Hue as the Hadoop superuser and you can manage that data smoothly.

 

The integration takes the following steps.

  Step 1: make sure the hadoop user group exists on the system (here, hadoop certainly does).

  Step 2: run the following command:

 

[hadoop@bigdatamaster env]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
[hadoop@bigdatamaster env]$ ll
total 16
drwxrwxr-x 2 hadoop hadoop 4096 May  5 20:59 bin
drwxrwxr-x 2 hadoop hadoop 4096 May  5 20:46 include
drwxrwxr-x 3 hadoop hadoop 4096 May  5 20:46 lib
lrwxrwxrwx 1 hadoop hadoop    3 May  5 20:46 lib64 -> lib
drwxrwxr-x 2 hadoop hadoop 4096 Aug  2 17:20 logs
-rw-rw-r-- 1 hadoop hadoop    0 May  5 20:46 stamp
[hadoop@bigdatamaster env]$ bin/hue useradmin_sync_with_unix
[hadoop@bigdatamaster env]$ 

 

 

   Step 3:

  After running the command above, you will find in Hue that the users have been brought in. They have no passwords yet, however, so you still need to set passwords for the Unix users and assign them to groups (the original walked through Hue's Useradmin screens here).

   Once those steps are complete, log in and you can happily manage the HDFS data.

 

 

Creating HDFS directories in Hue

  Problem: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup"
  Fix: create an hdfs user in Hue, then log in as hdfs to create directories and upload files.

  Reference:

https://geosmart.github.io/2015/10/27/CDH%E4%BD%BF%E7%94%A8%E9%97%AE%E9%A2%98%E8%AE%B0%E5%BD%95/

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

  Even then, it still did not actually succeed;

  bottom line, this problem comes back to Problem 4 of this post.

 

 

 

 

    My usual configuration is the same core-site.xml proxyuser block shown under Problem 4 above: hosts/groups wildcards for the hadoop, hue, and hdfs users (one pair per user I have). Add entries according to your own users, and after the change be sure to restart with sbin/stop-all.sh and sbin/start-all.sh.

 

 

 

 

 

 

  Problem 16

  Unable to create home directory for user hue, user hadoop, or user hdfs.

 Solution

  In core-site.xml under $HADOOP_HOME/etc/hadoop:

    <property>
        <name>hadoop.proxyuser.hue.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hue.groups</name>
        <value>*</value>
    </property>

 

 

  For example, if the home directory for user hdfs cannot be created, add:

    <property>
        <name>hadoop.proxyuser.hdfs.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfs.groups</name>
        <value>*</value>
     </property> 

  Success!

 

 

 

 

 

 

  Problem 17

  User [hue] not defined as proxyuser

  Where it comes from:

"The oozie is running, when I click WorkFlows, there appears an error"

  That is, with Oozie running, clicking Workflows in the Hue UI produces the error above.

  Solution

Hue submits MapReduce jobs to Oozie as the logged-in user. You need to configure Oozie to accept the hue user as a proxyuser. Specify this in your oozie-site.xml (even on a non-secure cluster), and restart Oozie:

<property>
    <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
    <value>*</value>
</property>

  參考: http://archive.cloudera.com/cdh4/cdh/4/hue-2.0.0-cdh4.0.1/manual.html

 

  That is, add the above to oozie-site.xml.

  Once it is added, restart Oozie. Remember to check the process with jps first and kill it before restarting.

   Note: /home/hadoop/app/oozie is my Oozie install directory.

   Alternatively, restarting with the following command also works:

[hadoop@bigdatamaster oozie]$ pwd
/home/hadoop/app/oozie
[hadoop@bigdatamaster oozie]$ bin/oozied.sh restart

 

  After that, Workflows loads normally (the original showed a screenshot of the result).

  Problem 18

   The Oozie server is not running

   Solution

See: Detailed steps to start Oozie (CDH, 3-node cluster)

 

 

 

 

  Problem 19

 Api error: ('Connection aborted.', error(111, 'Connection refused'))

   Solution

  (The original showed only screenshots here. In general, 'Connection refused' means the service Hue is trying to reach is not running; start the corresponding service.)

 

 

 Problem 20

  Could not connect to bigdatamaster:21050

   Solution

   Start the Impala service (21050 is impalad's HiveServer2-protocol port).

 

 

 

 

 

  Problem 21:

OperationalError: (2003, "Can't connect to MySQL server on 'bigdatamaster' (111)")

  Solution

[root@bigdatamaster hadoop]# service mysqld start
Starting mysqld:                                           [  OK  ]
[root@bigdatamaster hadoop]# 

  Then just refresh the page.



 










Problem 22:

Running Hue's ./build/env/bin/supervisor fails with
IOError: [Errno 13] Permission denied: '/opt/modules/hue-3.9.0-cdh5.5.0/logs/supervisor.log'
File "./build/env/bin/supervisor", line 9

[kfk@bigdata-pro01 hue-3.9.0-cdh5.5.0]$ ./build/env/bin/supervisor 
Traceback (most recent call last):
  File "./build/env/bin/supervisor", line 9, in <module>
    load_entry_point('desktop==3.9.0', 'console_scripts', 'supervisor')()
  File "/opt/modules/hue-3.9.0-cdh5.5.0/desktop/core/src/desktop/supervisor.py", line 358, in main
    _init_log(log_dir)
  File "/opt/modules/hue-3.9.0-cdh5.5.0/desktop/core/src/desktop/supervisor.py", line 294, in _init_log
    desktop.log.basic_logging(PROC_NAME, log_dir)
  File "/opt/modules/hue-3.9.0-cdh5.5.0/desktop/core/src/desktop/log/__init__.py", line 146, in basic_logging
    logging.config.fileConfig(log_conf)
  File "/usr/lib64/python2.6/logging/config.py", line 84, in fileConfig
    handlers = _install_handlers(cp, formatters)
  File "/usr/lib64/python2.6/logging/config.py", line 162, in _install_handlers
    h = klass(*args)
  File "/usr/lib64/python2.6/logging/handlers.py", line 112, in __init__
    BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib64/python2.6/logging/handlers.py", line 64, in __init__
    logging.FileHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib64/python2.6/logging/__init__.py", line 835, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/lib64/python2.6/logging/__init__.py", line 854, in _open
    stream = open(self.baseFilename, self.mode)
IOError: [Errno 13] Permission denied: '/opt/modules/hue-3.9.0-cdh5.5.0/logs/supervisor.log'

 





  Solution:
    Only the build directory ended up owned by root (from building as root); everything else belongs to the regular user. Hand the whole tree back to the user who runs Hue, as sketched below.
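
  A sketch of the ownership fix, using the user and path from the traceback above (kfk is that log's user; substitute your own):

sudo chown -R kfk:kfk /opt/modules/hue-3.9.0-cdh5.5.0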

 







  Problem 23:

  "There is currently no configured database. Go to your Hue configuration and add a database under the 'rdbms' section."

    The following is the default:
###########################################################################
# Settings for the RDBMS application
###########################################################################

[librdbms]
  # The RDBMS app can have any number of databases configured in the databases
  # section. A database is known by its section name
  # (IE sqlite, mysql, psql, and oracle in the list below).

  [[databases]]
    # sqlite configuration.
    ## [[[sqlite]]]
      # Name to show in the UI.
      ## nice_name=SQLite

      # For SQLite, name defines the path to the database.
      ## name=/tmp/sqlite.db

      # Database backend to use.
      ## engine=sqlite

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

    # mysql, oracle, or postgresql configuration.
    ## [[[mysql]]]
      # Name to show in the UI.
      ## nice_name="My SQL DB"

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      ## name=mysqldb

      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      ## engine=mysql

      # IP or hostname of the database to connect to.
      ## host=localhost

      # Port the database server is listening to. Defaults are:
      # 1. MySQL: 3306
      # 2. PostgreSQL: 5432
      # 3. Oracle Express Edition: 1521
      ## port=3306

      # Username to authenticate with when connecting to the database.
      ## user=example

      # Password matching the username to authenticate with when
      # connecting to the database.
      ## password=example

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

 



    Most likely you did not edit this section carefully.

  Change it to:

# sqlite configuration.
    [[[sqlite]]]
      # Name to show in the UI.
      nice_name=SQLite

      # For SQLite, name defines the path to the database.
      name=/opt/modules/hue-3.9.0-cdh5.5.0/desktop/desktop.db

      # Database backend to use.
      engine=sqlite

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

 

 

 



   Then stop MySQL, and restart MySQL and Hue:

[kfk@bigdata-pro01 conf]$ sudo service mysqld restart
Stopping mysqld:                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[kfk@bigdata-pro01 conf]$ 

 

 





[kfk@bigdata-pro01 hue-3.9.0-cdh5.5.0]$ ./build/env/bin/supervisor 
[INFO] Not running as root, skipping privilege drop
starting server with options:
{'daemonize': False,
 'host': 'bigdata-pro01.kfk.com',
 'pidfile': None,
 'port': 8888,
 'server_group': 'hue',
 'server_name': 'localhost',
 'server_user': 'hue',
 'ssl_certificate': None,
 'ssl_cipher_list': 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA',
 'ssl_private_key': None,
 'threads': 40,
 'workdir': None}

 








 

 

 

 

 

 

 





 







 In other words, for the Hue server entry named "My SQL DB" (mysql, oracle, or postgresql; usually mysql), the backing database is metastore, the MySQL database created for Hive to serve as its metadata store:

 <property>
     <name>javax.jdo.option.ConnectionURL</name>
     <value>jdbc:mysql://master:3306/metastore?createDatabaseIfNotExist=true</value>
 </property>

    For the Hue server entry named SQLite, the database is /opt/modules/hue-3.9.0-cdh5.5.0/desktop/desktop.db.













  Problem 24:

hue  HBase Thrift 1 server cannot be contacted:  9090

  (The original showed only screenshots for this one. It typically means no HBase Thrift 1 server is listening on port 9090; start it as sketched below, and see Problem 25 for the thrift-vs-thrift2 distinction.)
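
  A sketch of starting the Thrift 1 server from $HBASE_HOME (9090 is its default port):

bin/hbase-daemon.sh start thrift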


  Problem 25:
Api Error: Invalid method name: 'getTableNames'

Cause

Digging through the docs, I suspected the client's Thrift version did not match the HBase Thrift server's.

Sure enough: the server had been started with thrift2, while the client was talking thrift (v1).

Solution

Since the root cause is the client/server Thrift version mismatch, either make the client speak thrift2 or run the server as thrift1; here, restart HBase's Thrift server in thrift1 mode:

# hbase-daemon.sh stop thrift2
# hbase-daemon.sh start thrift

 
        

 








 
        
 Problem 26:
socket.error: [Errno 98] Address already in use

The cause of this one:
  Hue was still running while you edited something like hue.ini on the command line, and you started it again without shutting it down cleanly first. Find and kill the old process, as sketched below.
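
  A sketch of clearing the stale listener, assuming Hue is on its default port 8888:

lsof -i :8888      # find the PID of the old Hue process
kill <PID>         # then start build/env/bin/supervisor again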

 






 





 














 Problem 27:

  (This entry in the original consisted only of screenshots; no text is recoverable.)

Problem 28:
desktop_settings doesn't exist

Restart the Hue process:

[hadoop@bigdatamaster hue]$ build/env/bin/supervisor

   After completing the configuration, starting Hue and visiting it in a browser produces an error because the MySQL database has not been initialized:
DatabaseError: (1146, "Table 'hue.desktop_settings' doesn't exist")

  or

ProgrammingError: (1146, "Table 'hive.django_session' doesn't exist")

   Initialize the database:

cd /home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
bin/hue syncdb
bin/hue migrate

  Once these finish, you can see in MySQL that Hue's tables have been created.

  Start Hue, and it is accessible normally. (This is the same procedure as Problem 2 above.)

 

 

 

 

 

  Problem 29:

build/env/bin/supervisor reports No such file or directory when run

     Solution:

           If you really cannot sort the Hue installation out, copy an already-installed Hue over from another machine.

           Rest assured, it can be dropped into your Apache or CDH cluster; the author has done exactly that.

          The only thing is that you must redo the symlinks by hand, which is not hard: delete them and recreate them with ln -s.

 

 

 



 Problem 30: Forgot the Hue login password?

For example, I started out with user hadoop and password hadoop, and now want user hue with password hue. (The original walked through Hue's Useradmin screens in screenshots.)

At the point where the username has been changed from hadoop to hue but the password has not yet been updated, don't panic; see the sketch below.
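
A sketch using Hue's management wrapper (changepassword is the standard Django auth command exposed through bin/hue; the path matches my install above):

[hadoop@bigdatamaster hue-3.9.0-cdh5.5.4]$ build/env/bin/hue changepassword hue    # prompts twice for the new password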






Welcome to follow my WeChat public accounts: 大數據躺過的坑 and 人工智能躺過的坑.

You can also follow my personal blogs:

   http://www.cnblogs.com/zlslch/, http://www.cnblogs.com/lchzls/, and http://www.cnblogs.com/sunnyDream/

   Details: http://www.cnblogs.com/zlslch/p/7473861.html

  Life is short, and I am glad to share. This account follows the open-source spirit of lifelong learning and exchange, collecting practical knowledge from the internet and from my own study and work; everything comes from the internet and is given back to the internet.
  Current areas of interest: big data, machine learning, deep learning, artificial intelligence, data mining, and data analysis. Languages involved: Java, Scala, Python, Shell, Linux, and more; plus everyday tips, problems, and useful software for phones, computers, and the internet.

      QQ group for discussion and Q&A: 大數據和人工智能躺過的坑 (main group) (161156071)
