121 Commits

Author SHA1 Message Date
Eric Pearson
c46080726d Merge pull request #202 from xinkonglili/weilili
fix(broker):Added windows pipe socket communication, compatible with other systems #199
2024-04-19 17:04:13 +08:00
wei_lilitw
9063c6069c feat(broker):Added pipe socket conditions for different systems 2024-04-19 15:24:14 +08:00
wei_lilitw
d50464571e feat(hmq): add Windows pipe socket support. Uses the open-source npipe project, which cleanly wraps Windows named-pipe operations and returns a net.Conn. 2024-04-18 17:31:35 +08:00
joy,zhou
2ceb61a027 update x/net 2024-04-17 14:44:26 +08:00
dependabot[bot]
c75470f5de Bump google.golang.org/protobuf from 1.30.0 to 1.33.0 (#196)
Bumps google.golang.org/protobuf from 1.30.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-15 19:03:36 +08:00
Lijin
8ddca9bdc3 feat(hmq): Provide handler StartUnixSocketClientListening to handle Unix socket communication (#198)
Co-authored-by: wei_lilitw <wei_lilitw@topsec.com.cn>
2024-04-15 19:03:24 +08:00
Eric Pearson
6c75361f88 Update windows.yml 2024-03-27 17:34:13 +08:00
Eric Pearson
7a603e1a34 Update ubuntu.yml 2024-03-27 17:34:00 +08:00
Eric Pearson
fef923d10a Update go.yml 2024-03-27 17:33:48 +08:00
Eric Pearson
4a85fcb615 Update macos.yml 2024-03-27 17:33:34 +08:00
zhouyy
48b146d64e feat: update 2024-01-08 18:05:40 +08:00
zhouyy
5ba8038ac2 Merge branch 'master' of chowyu08.github.com:fhmq/hmq 2024-01-08 18:01:04 +08:00
zhouyy
1c2d20eaf5 feat: update go version 2024-01-08 17:55:55 +08:00
chowyu12
de2dd52ca4 Merge pull request #194 from fhmq/dependabot/go_modules/golang.org/x/crypto-0.17.0
Bump golang.org/x/crypto from 0.14.0 to 0.17.0
2023-12-19 08:33:17 +08:00
dependabot[bot]
ea619d4f73 Bump golang.org/x/crypto from 0.14.0 to 0.17.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-18 23:49:23 +00:00
chowyu12
35944d774d Merge pull request #193 from spit4520/master
HOTFIX | Fixed pubMsg when WillTopic is null
2023-12-12 16:12:57 +08:00
Scott Joseph Spitler II
cdff42698a Removed un-needed log line 2023-12-11 23:04:24 -05:00
Scott Joseph Spitler II
9fc57423db Fixed pubMsg when WillTopic is null
Previously the broker would compile and run, but would throw a runtime panic when the will was nil, because of Go's inline struct dereference. Should REALLY consider adding a unit test
2023-12-11 22:53:41 -05:00
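The fix amounts to guarding the will message with a nil check before dereferencing it. A hedged sketch of the pattern — the type and function names below are illustrative stand-ins, not hmq's actual ones:

```go
package main

import "fmt"

// willMessage is a hypothetical stand-in for the broker's stored will info.
type willMessage struct {
	Topic   string
	Payload []byte
}

type clientInfo struct {
	will *willMessage // nil when the client connected without a will
}

// publishWill returns the will topic, or false when there is no will.
// Dereferencing c.will unconditionally is what caused the runtime panic
// this commit fixes.
func publishWill(c *clientInfo) (string, bool) {
	if c.will == nil || c.will.Topic == "" {
		return "", false
	}
	return c.will.Topic, true
}

func main() {
	noWill := &clientInfo{}
	if _, ok := publishWill(noWill); !ok {
		fmt.Println("no will to publish, no panic")
	}
	withWill := &clientInfo{will: &willMessage{Topic: "status/offline"}}
	topic, _ := publishWill(withWill)
	fmt.Println(topic)
}
```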
chowyu12
e3fa6573f6 Merge pull request #176 from spit4520/master
Added in GET /connections to update restarting node
2023-12-11 15:45:23 +08:00
Scott Joseph Spitler II
4f98faeefc Added in detailed conn client logs
Created new data types that store the last time a message was received from a device, and publish the last-will topic, keep-alive time, and credentials over the /connections topic. The same data is mirrored in the REST API so that synchronous stateful services (for example, one recovering from a crashed k8s pod) can rebuild state on startup: subscribe to /connections/+ first, then GET /api/v1/connections to fetch the currently open connections. If a device connects while you are still setting up your state, your messageHandler handles it, since it carries the same information. This information is also published for devices you have no control over, and for relay purposes: you can take all of the device information and create a faux client that emulates your downstream device. That may sound strange, but there is a real use case for it: a lot of cheap Chinese IoT devices were not designed for mass production, and their messages have to be fixed in the cloud before being relayed to other legacy servers.
2023-12-11 02:30:31 -05:00
zhouyy
805a7b895a update go version 2023-11-06 22:02:45 +08:00
Husy
a94159e79c fix typo for error message (#191) 2023-11-05 19:15:35 -06:00
dependabot[bot]
51adb125dd Bump golang.org/x/net from 0.10.0 to 0.17.0 (#192)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.10.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.10.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-05 19:14:29 -06:00
chowyu12
1ef00a7a50 Merge pull request #188 from fhmq/dependabot/go_modules/github.com/gin-gonic/gin-1.9.1
Bump github.com/gin-gonic/gin from 1.9.0 to 1.9.1
2023-06-02 10:20:20 +08:00
dependabot[bot]
af6f4d280a Bump github.com/gin-gonic/gin from 1.9.0 to 1.9.1
Bumps [github.com/gin-gonic/gin](https://github.com/gin-gonic/gin) from 1.9.0 to 1.9.1.
- [Release notes](https://github.com/gin-gonic/gin/releases)
- [Changelog](https://github.com/gin-gonic/gin/blob/master/CHANGELOG.md)
- [Commits](https://github.com/gin-gonic/gin/compare/v1.9.0...v1.9.1)

---
updated-dependencies:
- dependency-name: github.com/gin-gonic/gin
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-01 20:34:43 +00:00
joy.zhou
73acb5a211 Update README.md 2023-05-05 10:52:04 +08:00
dependabot[bot]
239655d0a1 Bump github.com/gin-gonic/gin from 1.8.2 to 1.9.0 (#184)
Bumps [github.com/gin-gonic/gin](https://github.com/gin-gonic/gin) from 1.8.2 to 1.9.0.
- [Release notes](https://github.com/gin-gonic/gin/releases)
- [Changelog](https://github.com/gin-gonic/gin/blob/master/CHANGELOG.md)
- [Commits](https://github.com/gin-gonic/gin/compare/v1.8.2...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/gin-gonic/gin
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-05 10:51:02 +08:00
Giovanni Rosa
c083b83f3d Fix for Dockerfile smell DL3006 (#183)
Signed-off-by: Giovanni Rosa <g.rosa1@studenti.unimol.it>
2023-04-13 11:31:30 +08:00
Husy
e9f340c38f fix conn.Close() in function handleConnection (#180)
Co-authored-by: husyhu <husyhu@qq.com>
2023-03-29 09:45:12 +08:00
nezhuhos
0daf8bfed9 create: go.yml (#181)
add go.yml
2023-03-29 09:44:51 +08:00
Husy
8430749ec4 rm a redundant creation of goroutines (#179)
Co-authored-by: husyhu <husyhu@qq.com>
2023-02-17 17:01:16 +08:00
zhouyy
15f3f6d52e Merge branch 'master' of chowyu08.github.com:fhmq/hmq 2023-02-16 19:28:02 +08:00
zhouyy
1c4ead691e feat: update go.mod 2023-02-16 19:27:17 +08:00
Scott Joseph Spitler II
3aea177ea8 Added in GET /connections to update restarting node
During a temporary service re-hydration, this allows the recovering service to:

1.) Sub to /connections for all incoming connections
then
2.) Get a list of all connections that exist right now.

This is useful for rebuilding caches, and any edge-workers that need to be restarted can be. It also means you are not 100% coupled to Redis or any other MQ stream. There are still no routes for the MQ streams to get all connections on connect; when we migrate to that, we will add this support as well.
2022-09-23 16:57:26 -04:00
chowyu12
b2e79c3bea feat: update lib and replace json 2022-06-18 21:49:55 +08:00
ZhangJian He
5dc2114daf allow bridge mq cost msg (#162) 2022-05-20 21:27:48 +08:00
Lucas Vieira
92758c8c85 refactor: ♻️ fixes typo (#165) 2022-05-20 21:27:25 +08:00
Lucas Vieira
0e3226ece1 Separates CI pipelines (#166)
* refactor: ♻️ fixes typo

* ci: 💚 separates continuous integration pipelines

Signed-off-by: Lucas Vieira <lucas.engen.cc@gmail.com>

* docs: 📚 adds CI badges
2022-05-20 21:26:33 +08:00
Lucas Vieira
061b485a3a fix: 🐛 fixes nil pointer dereference (#163)
checks connection type before accessing values #161

Signed-off-by: Lucas Vieira <lucas.engen.cc@gmail.com>
2022-04-26 10:40:45 +08:00
ZhangJian He
7787d3ca0d fix a misleading annotation (#160) 2022-04-25 20:38:03 +08:00
muXxer
a95c028cb8 Update to Go 1.18 and replace fnv-1a with xxhash (#158)
* Update modules and go 1.18

* Use xxhash instead of fnv-1a

* Update go version in dockerfile and github workflow
2022-04-25 20:37:22 +08:00
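This commit swaps the hash function from FNV-1a to xxhash for speed. The pattern is sketched below with stdlib `hash/fnv`, since xxhash lives in an external module (e.g. github.com/cespare/xxhash); that the hash is used to shard client IDs across workers is an assumption for illustration — the repo's exact use may differ:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// workerIndex shards a client ID onto one of n workers. xxhash computes
// the same kind of 64-bit digest faster than FNV-1a on larger inputs,
// which is the motivation for the swap.
func workerIndex(clientID string, n uint64) uint64 {
	h := fnv.New64a()
	h.Write([]byte(clientID))
	return h.Sum64() % n
}

func main() {
	for _, id := range []string{"sensor-1", "sensor-2", "sensor-1"} {
		// Identical IDs always land on the same worker.
		fmt.Println(id, "->", workerIndex(id, 8))
	}
}
```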
muXxer
c53d8f8a0d Fix nil pointer exception in addr logs (#157) 2022-04-09 12:53:43 +08:00
Yog
fa7bf33c60 Update surgemq repositories (#150)
The link https://github.com/surgemq/surgemq now seems to be a different repo; the current one appears to be https://github.com/zentures/surgemq
2022-04-07 10:19:57 +08:00
ZhangJian He
a85e9904c2 Add addr in broker log (#146) 2022-04-04 20:41:33 +08:00
ZhangJian He
a501565bab Allow publish message by clientId (#147) 2022-04-04 20:41:04 +08:00
Ron Evans
bd5bd04e45 docker: use ENTRYPOINT so command line args can get passed into container when executed (#151)
Signed-off-by: deadprogram <ron@hybridgroup.com>
2022-04-04 20:33:32 +08:00
Lucas Vieira
f8a44be413 fix: 🐛 fixes critical race condition #152 (#154)
* fix: 🐛 fixes critical race condition #152

Signed-off-by: Lucas Vieira <lucas.engen.cc@gmail.com>

* fix: fixes race condition
2022-04-04 20:33:11 +08:00
Lucas Vieira
31864cdf2b fix: 🐛 fixes race condition (#155)
- this race condition occurs when a client is disconnected or when hmq checks
if client still alive

Signed-off-by: Lucas Vieira <lucas.engen.cc@gmail.com>
2022-04-04 20:32:38 +08:00
ZhangJian He
94ff8e8405 Update Support Go version (#143) 2022-01-30 09:14:29 +08:00
chowyu12
bf2b91c535 update: go.mod and go version (#142)
Co-authored-by: zhouyy <zhouyy@ickey.cn>
2022-01-19 11:16:24 +08:00
ZhangJian He
de0cfc6683 Fix typo, delete unused file (#141) 2022-01-19 10:53:03 +08:00
ZhangJian He
332c8a59f7 Drop the support for Golang 1.14 (#139) 2022-01-14 10:08:37 +08:00
ZhangJian He
108e934a85 [cleanup] delete comment out code (#137) 2022-01-10 10:59:39 +08:00
ZhangJian He
46b64e5b84 Close conn when read connect packet error (#136) 2022-01-10 10:59:15 +08:00
ZhangJian He
ab117be4a8 Allow Broker DisConnect connections by ClientId (#135)
* Allow Broker DisConnect connections by ClientId

* Allow Broker DisConnect connections by ClientId
2022-01-05 11:15:45 +08:00
ZhangJian He
878e7fce3f Remove unnecessary type conversion (#134) 2022-01-05 11:15:26 +08:00
joy.zhou
8d486c3a20 Update deploy.yaml 2021-12-08 18:13:15 +08:00
dependabot[bot]
764d0402f0 Bump github.com/tidwall/gjson from 1.6.8 to 1.9.3 (#131)
Bumps [github.com/tidwall/gjson](https://github.com/tidwall/gjson) from 1.6.8 to 1.9.3.
- [Release notes](https://github.com/tidwall/gjson/releases)
- [Commits](https://github.com/tidwall/gjson/compare/v1.6.8...v1.9.3)

---
updated-dependencies:
- dependency-name: github.com/tidwall/gjson
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-05 18:15:03 +08:00
Gary Barnett
538bf70f5b First Commit (#130)
Co-authored-by: Gary Barnett <gary.barnett@airsensa.com>
2021-11-05 10:28:58 +08:00
muXxer
1d6979189a use locks around client maps (#126)
Co-authored-by: Luca Moser <moser.luca@gmail.com>
2021-08-10 10:46:38 +08:00
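Unsynchronized map access from concurrent connection goroutines is a data race in Go; the commit's fix is to guard the client maps with locks. A minimal sketch of the RWMutex pattern (illustrative only — hmq also uses `sync.Map` in places, and its real structures differ):

```go
package main

import (
	"fmt"
	"sync"
)

// clientRegistry guards a client map with an RWMutex: writers take the
// exclusive lock, readers the shared one.
type clientRegistry struct {
	mu      sync.RWMutex
	clients map[string]string // clientID -> remote address (placeholder value type)
}

func newClientRegistry() *clientRegistry {
	return &clientRegistry{clients: make(map[string]string)}
}

func (r *clientRegistry) add(id, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.clients[id] = addr
}

func (r *clientRegistry) get(id string) (string, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	a, ok := r.clients[id]
	return a, ok
}

func main() {
	reg := newClientRegistry()
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func(n int) { // concurrent writers no longer race
			defer wg.Done()
			reg.add(fmt.Sprintf("c%d", n), "10.0.0.1:1883")
		}(i)
	}
	wg.Wait()
	_, ok := reg.get("c7")
	fmt.Println("c7 present:", ok)
}
```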
dependabot[bot]
c75ef2d6aa Bump github.com/gin-gonic/gin from 1.4.0 to 1.7.0 (#128)
Bumps [github.com/gin-gonic/gin](https://github.com/gin-gonic/gin) from 1.4.0 to 1.7.0.
- [Release notes](https://github.com/gin-gonic/gin/releases)
- [Changelog](https://github.com/gin-gonic/gin/blob/master/CHANGELOG.md)
- [Commits](https://github.com/gin-gonic/gin/compare/v1.4.0...v1.7.0)

---
updated-dependencies:
- dependency-name: github.com/gin-gonic/gin
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-10 10:45:16 +08:00
TrickIt
068d5e893c fix: close the connection after sending a connection-refused CONNACK packet. (#121) 2021-06-30 14:13:43 +08:00
TrickIt
f66abe5fcb fix: if the packet from the client is a DISCONNECT, break the read-packet loop before the next iteration starts, and clear the will message. (#120) 2021-05-31 15:26:26 +08:00
Luca Moser
ccbe364f9f check whether subscription or topics manager are nil before using them in client Close() (#113) 2021-03-18 10:02:49 +08:00
Lucas Vieira
7cc3949bbe Packet fields validation (#111)
* chore: ignore .pre-commit-config.yaml

Signed-off-by: Lucas Vieira <lucas.engen.cc@gmail.com>

* fix: 🐛 perform validation on control packet fields (fhmq/hmq#104)

Signed-off-by: Lucas Vieira <lucas.engen@outlook.com>

* feat: ❇️ add handling of null UTF-8 encoded character

Signed-off-by: Lucas Vieira <lucas.engen@outlook.com>
2021-02-26 15:44:01 +08:00
Lucas Vieira
afe62e0a7d ci: 💚 add build pipeline (#112) 2021-02-26 15:43:29 +08:00
Rajiv Shah
b4baac9c81 Bump gjson to 1.6.8 (#109) 2021-02-23 10:37:13 +08:00
Luca Moser
7bf5d52fd9 use defer to unlock in WriterPacket() (#107) 2021-02-07 14:50:31 +08:00
turtletramp
ad7f4bc3f0 bug-2 adding RWMutex to inflight map and update the map access to use the mutex (#108) 2021-01-18 14:45:38 +08:00
Michael Stapelberg
524a9af060 retry listening indefinitely (#105)
When starting hmq immediately after booting, the IP address specified in the
--host flag (e.g. --host=10.0.0.217) might not be configured yet.

Without this commit, hmq would try to listen, give up, and then just hang.
With this commit, hmq automatically recovers.
2021-01-11 10:21:02 +08:00
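The recovery behavior this commit describes is a retry loop around the listen call (the actual implementation appears in the diff further down this page). A self-contained sketch of the idea, bounded here so the example terminates — hmq retries indefinitely with a 1-second sleep:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// listenWithRetry keeps retrying until the address can be bound: if the
// --host IP is not configured yet at boot, keep trying instead of giving
// up and hanging.
func listenWithRetry(addr string, maxTries int) (net.Listener, error) {
	var lastErr error
	for i := 0; i < maxTries; i++ {
		l, err := net.Listen("tcp", addr)
		if err == nil {
			return l, nil
		}
		lastErr = err
		time.Sleep(100 * time.Millisecond) // shortened; hmq sleeps 1s
	}
	return nil, lastErr
}

func main() {
	// Port 0 asks the OS for any free port, so this succeeds immediately.
	l, err := listenWithRetry("127.0.0.1:0", 10)
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println("listening on", l.Addr().Network())
}
```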
joy.zhou
8f187157f3 Revert "op: low performance code (#102)" (#103)
This reverts commit c2248bed2b.
2021-01-07 16:24:09 +08:00
c2248bed2b op: low performance code (#102)
thanks
2021-01-07 14:12:28 +08:00
turtletramp
6be79cbe88 Bugfix - authfile plugin did wrongly use username as IP and IP as username in ACL checks (#100)
* adding test + fix issue with wrong order in acl check

* reduce to featureset from original fork
2020-12-02 10:05:46 +08:00
sngyai
6cb307d252 Feature qos1&qos2 (#99)
* client publish qos2

* server dispatch qos1&qos2

* Use at most one timer for each client

* Use at most one timer for each client
2020-11-30 11:34:03 +08:00
joy,zhou
b8bacb4c3d fixed bug #96 2020-08-26 17:24:22 +08:00
chujiangke
481a61c520 fix (#90) 2020-06-24 15:14:25 +08:00
Rajiv Shah
4782f76048 Replace satori/go.uuid with google/uuid (#89) 2020-06-09 10:13:37 +08:00
Aleksey Myasnikov
1a374f9734 Update comm.go (#85) 2020-05-08 11:26:44 +08:00
janson
3f60d23e85 fix fail in cluster deploy (#86)
Co-authored-by: janson <janson@gmail.com>
2020-05-08 11:26:26 +08:00
yu
3cf90d5231 add websocket client ip 2020-04-16 14:08:51 +08:00
gerdstolpmann
a1bf3d93b2 only set a read deadline when the keep-alive value is positive (#83) 2020-04-16 10:33:17 +08:00
gerdstolpmann
af7db83bdc do not try to set remoteIP for websocket connections (#81) 2020-04-04 10:41:36 +08:00
gerdstolpmann
839041e912 do not expect "Origin" header for websocket connections (#80)
* websocket: do not check the presence of the "Origin" header

* avoid using http.DefaultServeMux
2020-04-04 10:40:12 +08:00
gerdstolpmann
17dac26996 when used as a library, allow the auth and bridge plugins to be set by (#79)
struct, and not only by name
2020-04-03 14:49:50 +08:00
joy.zhou
55f1f1aa80 Update deploy.yaml 2020-01-19 11:19:21 +08:00
joy.zhou
ccb7c37b96 Update svc.yaml 2020-01-19 11:18:44 +08:00
joy.zhou
7e29cc7213 Update svc.yaml 2020-01-19 11:18:38 +08:00
winglq
1971b5c324 update retained message even if it's already there (#70)
Signed-off-by: Liu Qing <winglq@gmail.com>
2020-01-06 11:22:59 +08:00
foosinn
fb453e8c0f fix ipv6 addresses (#68) 2019-12-30 13:42:31 +08:00
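A common cause of IPv6 address bugs in Go network code is building addresses with naive `host + ":" + port` concatenation, which produces an unparsable address for IPv6 hosts; `net.JoinHostPort` adds the required brackets. This is the kind of fix the commit implies — a sketch under that assumption, not the repo's exact patch:

```go
package main

import (
	"fmt"
	"net"
)

// listenAddr builds a dialable/listenable address. For IPv6 literals,
// JoinHostPort wraps the host in brackets so the port separator is
// unambiguous.
func listenAddr(host, port string) string {
	return net.JoinHostPort(host, port)
}

func main() {
	fmt.Println(listenAddr("0.0.0.0", "1883")) // 0.0.0.0:1883
	fmt.Println(listenAddr("::1", "1883"))     // [::1]:1883
}
```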
joy.zhou
eef900ad2f Update comm.go 2019-12-25 17:14:44 +08:00
joy.zhou
d24e0dac13 Update info.go 2019-12-25 17:14:11 +08:00
joy.zhou
fd0622710b Update client.go 2019-12-25 17:13:44 +08:00
joy.zhou
73dd5bb376 Update config.go 2019-12-25 17:13:16 +08:00
joy.zhou
474c557c7a Update sesson.go 2019-12-25 17:12:59 +08:00
joy.zhou
f3e7e5481a Update auth.go 2019-12-25 17:12:30 +08:00
joy.zhou
57fce9c7dc Update broker.go 2019-12-25 17:12:07 +08:00
joy.zhou
995898c5f4 Update main.go 2019-12-25 17:10:32 +08:00
joy.zhou
2404693bd2 fix issue #66 2019-12-12 15:07:12 +08:00
joy.zhou
68cd5e94a4 Merge branch 'master' of https://github.com/fhmq/hmq 2019-11-14 11:09:52 +08:00
joy.zhou
44fa819f62 update some logic 2019-11-14 11:09:15 +08:00
joy.zhou
2b7bb3fcd5 Update README.md 2019-11-11 21:08:21 +08:00
joy.zhou
4c107c67ab fix bug (#63)
* update

* update auth file

* fixbug
2019-11-11 11:41:38 +08:00
joy.zhou
896769fd9d Add acl (#61)
* update

* update auth file
2019-10-30 14:44:18 +08:00
joy.zhou
c7a51fe68f fixed 2019-09-30 11:06:05 +08:00
joy.zhou
a3fc611615 fix issue 2019-09-30 11:04:46 +08:00
H.K
e74b9facd1 fix: (#57)
topics map used but never initialized
2019-09-30 10:50:40 +08:00
joy.zhou
53a79caad9 update deploy 2019-09-18 14:17:19 +08:00
joy.zhou
55576c1eb3 update kafka plugins 2019-09-18 14:00:19 +08:00
joy.zhou
80b64b147e delete acl file 2019-08-23 16:40:39 +08:00
joy.zhou
ea055d5929 update authhttp 2019-08-23 16:22:59 +08:00
joy.zhou
8d8707801f remove unused code 2019-08-20 10:27:15 +08:00
joy.zhou
fd2974a546 update Readme 2019-08-19 10:57:29 +08:00
joy.zhou
72211efedf Merge branches 'plugin_update' and 'master' of https://github.com/fhmq/hmq 2019-08-19 10:48:55 +08:00
joy.zhou
7e15da209e Plugin update (#48)
* replace plugin

* update plugin
2019-08-19 10:35:17 +08:00
joy.zhou
69a26f8cd9 update plugin 2019-08-19 10:33:19 +08:00
joy.zhou
148738800b replace plugin 2019-08-16 18:18:19 +08:00
joy.zhou
e4e736d1e2 update readme.md 2019-08-02 10:10:27 +08:00
joy.zhou
4c5a48a44b Plugins update log (#47)
* modify

* update

* add acl

* add feature

* update dockerfile

* add deploy

* update

* update

* plugins

* plugins

* update

* update

* update

* fixed

* remove

* fixed

* add log

* update

* fixed

* update

* fix config

* add http api

* add http api

* resp

* add config for work chan

* update

* fixed

* update

* disable trace

* fixed

* change acl

* fixed

* fixed res

* dd

* dd

* ddd

* dd

* update

* fixed

* update

* add

* fixed

* update key

* add log

* update

* format

* update

* update auth

* update

* update readme

* added

* update

* fixed

* fixed

* fix

* update

* update

* update

* update
2019-07-25 16:01:40 +08:00
joy.zhou
c6b1f1db42 Plugins support (#46)
* modify

* update

* add acl

* add feature

* update dockerfile

* add deploy

* update

* update

* plugins

* plugins

* update

* update

* update

* fixed

* remove

* fixed

* add log

* update

* fixed

* update

* fix config

* add http api

* add http api

* resp

* add config for work chan

* update

* fixed

* update

* disable trace

* fixed

* change acl

* fixed

* fixed res

* dd

* dd

* ddd

* dd

* update

* fixed

* update

* add

* fixed

* update key

* add log

* update

* format

* update

* update auth

* update

* update readme

* added

* update

* fixed

* fixed

* fix

* update

* update

* update
2019-07-25 13:54:42 +08:00
Yuyan Zhou
daf4a0e0f5 add vendor 2019-04-24 15:45:34 +08:00
joy.zhou
c350d16ca1 add fix pool for message order (#42)
* fix pool for message order

* add go modules
2019-04-24 14:54:21 +08:00
55 changed files with 3077 additions and 795 deletions

.github/workflows/go.yml vendored Normal file

@@ -0,0 +1,28 @@
# This workflow will build a golang project
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-go
name: Go
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.21
- name: Build
run: go build -v ./...
- name: Test
run: go test -v ./...

.github/workflows/macos.yml vendored Normal file

@@ -0,0 +1,18 @@
name: MacOS build
on: [push, pull_request]
jobs:
build:
runs-on: macos-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.21
- name: Build
run: go build -v ./...

.github/workflows/ubuntu.yml vendored Normal file

@@ -0,0 +1,18 @@
name: Ubuntu build
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.21
- name: Build
run: go build -v ./...

.github/workflows/windows.yml vendored Normal file

@@ -0,0 +1,18 @@
name: Windows build
on: [push, pull_request]
jobs:
build:
runs-on: windows-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.21
- name: Build
run: go build -v ./...

.gitignore vendored

@@ -1,4 +1,13 @@
hmq
log
log/*
*.test
*.test
# ide
.idea
.vscode/settings.json
.pre-commit-config.yaml
hmq.exe
*.sw*
*.swo
*.swp
*.swn


@@ -1,11 +1,12 @@
FROM alpine
COPY hmq /
COPY ssl /ssl
COPY conf /conf
FROM golang:1.18 as builder
WORKDIR /go/src/github.com/fhmq/hmq
COPY . .
RUN CGO_ENABLED=0 go build -o hmq -a -ldflags '-extldflags "-static"' .
FROM alpine:3.17.3
WORKDIR /
COPY --from=builder /go/src/github.com/fhmq/hmq/hmq .
EXPOSE 1883
EXPOSE 1888
EXPOSE 8883
EXPOSE 1993
CMD ["/hmq"]
ENTRYPOINT ["/hmq"]


@@ -1,3 +1,4 @@
Free and High Performance MQTT Broker
============
@@ -5,8 +6,6 @@ Free and High Performance MQTT Broker
Golang MQTT Broker, Version 3.1.1, and Compatible
for [eclipse paho client](https://github.com/eclipse?utf8=%E2%9C%93&q=mqtt&type=&language=) and mosquitto-client
Download: [click here](https://github.com/fhmq/hmq/releases)
## RUNNING
```bash
$ go get github.com/fhmq/hmq
@@ -60,8 +59,10 @@ Common Options:
"certFile": "tls/server/cert.pem",
"keyFile": "tls/server/key.pem"
},
"acl":true,
"aclConf":"conf/acl.conf"
"plugins": {
"auth": "authhttp",
"bridge": "kafka"
}
}
~~~
@@ -81,7 +82,24 @@ Common Options:
* TLS/SSL Support
* Flexible ACL
* Auth Support
* Auth Connect
* Auth ACL
* Cache Support
* Kafka Bridge Support
* Action Deliver
* Regexp Deliver
* HTTP API
* Disconnect Connect (future more)
### Share SUBSCRIBE
~~~
| Prefix               | Examples                              | Publish                |
| -------------------- | ------------------------------------- | ---------------------- |
| $share/<group>/topic | mosquitto_sub -t $share/<group>/topic | mosquitto_pub -t topic |
~~~
### Cluster
```bash
@@ -92,58 +110,7 @@ Common Options:
2, config router in hmq.config ("router": "127.0.0.1:9888")
```
### ACL Configure
#### The ACL rules define:
~~~
Allow | type | value | pubsub | Topics
~~~
#### ACL Config
~~~
## type clientid , username, ipaddr
##pub 1 , sub 2, pubsub 3
## %c is clientid , %u is username
allow ip 127.0.0.1 2 $SYS/#
allow clientid 0001 3 #
allow username admin 3 #
allow username joy 3 /test,hello/world
allow clientid * 1 toCloud/%c
allow username * 1 toCloud/%u
deny clientid * 3 #
~~~
~~~
#allow local sub $SYS topic
allow ip 127.0.0.1 2 $SYS/#
~~~
~~~
#allow client who's id with 0001 or username with admin pub sub all topic
allow clientid 0001 3 #
allow username admin 3 #
~~~
~~~
#allow client with the username joy can pub sub topic '/test' and 'hello/world'
allow username joy 3 /test,hello/world
~~~
~~~
#allow all client pub the topic toCloud/{clientid/username}
allow clientid * 1 toCloud/%c
allow username * 1 toCloud/%u
~~~
~~~
#deny all client pub sub all topic
deny clientid * 3 #
~~~
Clients are matched against the ACL rules one by one:
~~~
          ---------              ---------              ---------
Client -> | Rule1 | --nomatch--> | Rule2 | --nomatch--> | Rule3 | -->
          ---------              ---------              ---------
              |                      |                      |
            match                  match                  match
              v                      v                      v
         allow|deny             allow|deny             allow|deny
~~~
Other Version Of Cluster Based On gRPC: [click here](https://github.com/fhmq/rhmq)
### Online/Offline Notification
```bash
@@ -169,4 +136,9 @@ Client -> | Rule1 | --nomatch--> | Rule2 | --nomatch--> | Rule3 | -->
## Reference
* Surgermq.(https://github.com/surgemq/surgemq)
* Surgermq.(https://github.com/zentures/surgemq)
## Benchmark Tool
* https://github.com/inovex/mqtt-stresser
* https://github.com/krylovsk/mqtt-benchmark


@@ -1,81 +1,40 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package broker
import (
"github.com/fhmq/hmq/lib/acl"
"github.com/fsnotify/fsnotify"
"go.uber.org/zap"
"strings"
)
const (
PUB = 1
SUB = 2
SUB = "1"
PUB = "2"
)
func (c *client) CheckTopicAuth(typ int, topic string) bool {
if c.typ != CLIENT || !c.broker.config.Acl {
return true
}
if strings.HasPrefix(topic, "$queue/") {
topic = string([]byte(topic)[7:])
if topic == "" {
return false
func (b *Broker) CheckTopicAuth(action, clientID, username, ip, topic string) bool {
if b.auth != nil {
if strings.HasPrefix(topic, "$SYS/broker/connection/clients/") {
return true
}
if strings.HasPrefix(topic, "$share/") && action == SUB {
substr := groupCompile.FindStringSubmatch(topic)
if len(substr) != 3 {
return false
}
topic = substr[2]
}
return b.auth.CheckACL(action, clientID, username, ip, topic)
}
ip := c.info.remoteIP
username := string(c.info.username)
clientid := string(c.info.clientID)
aclInfo := c.broker.AclConfig
return acl.CheckTopicAuth(aclInfo, typ, ip, username, clientid, topic)
return true
}
var (
watchList = []string{"./conf"}
)
func (b *Broker) handleFsEvent(event fsnotify.Event) error {
switch event.Name {
case b.config.AclConf:
if event.Op&fsnotify.Write == fsnotify.Write ||
event.Op&fsnotify.Create == fsnotify.Create {
log.Info("text:handling acl config change event:", zap.String("filename", event.Name))
aclconfig, err := acl.AclConfigLoad(event.Name)
if err != nil {
log.Error("aclconfig change failed, load acl conf error: ", zap.Error(err))
return err
}
b.AclConfig = aclconfig
}
func (b *Broker) CheckConnectAuth(clientID, username, password string) bool {
if b.auth != nil {
return b.auth.CheckConnect(clientID, username, password)
}
return nil
}
func (b *Broker) StartAclWatcher() {
go func() {
wch, e := fsnotify.NewWatcher()
if e != nil {
log.Error("start monitor acl config file error,", zap.Error(e))
return
}
defer wch.Close()
return true
for _, i := range watchList {
if err := wch.Add(i); err != nil {
log.Error("start monitor acl config file error,", zap.Error(err))
return
}
}
log.Info("watching acl config file change...")
for {
select {
case evt := <-wch.Events:
b.handleFsEvent(evt)
case err := <-wch.Errors:
log.Error("error:", zap.Error(err))
}
}
}()
}

broker/bridge.go Normal file

@@ -0,0 +1,17 @@
package broker
import (
"github.com/fhmq/hmq/plugins/bridge"
"go.uber.org/zap"
)
func (b *Broker) Publish(e *bridge.Elements) bool {
if b.bridgeMQ != nil {
cost, err := b.bridgeMQ.Publish(e)
if err != nil {
log.Error("send message to mq error.", zap.Error(err))
}
return cost
}
return false
}


@@ -1,25 +1,25 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package broker
import (
"crypto/tls"
encJson "encoding/json"
"errors"
"fmt"
"net"
"net/http"
_ "net/http/pprof"
"runtime/debug"
"os"
"sync"
"sync/atomic"
"time"
"github.com/eclipse/paho.mqtt.golang/packets"
"github.com/fhmq/hmq/lib/acl"
"github.com/fhmq/hmq/lib/sessions"
"github.com/fhmq/hmq/lib/topics"
"github.com/fhmq/hmq/broker/lib/sessions"
"github.com/fhmq/hmq/broker/lib/topics"
"github.com/fhmq/hmq/plugins/auth"
"github.com/fhmq/hmq/plugins/bridge"
"github.com/fhmq/hmq/pool"
"github.com/shirou/gopsutil/mem"
"github.com/eclipse/paho.mqtt.golang/packets"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
"golang.org/x/net/websocket"
)
@@ -35,21 +35,19 @@ type Message struct {
type Broker struct {
id string
cid uint64
mu sync.Mutex
config *Config
tlsConfig *tls.Config
AclConfig *acl.ACLConfig
wpool *pool.WorkerPool
clients sync.Map
routes sync.Map
remotes sync.Map
nodes map[string]interface{}
clusterPool chan *Message
queues map[string]int
topicsMgr *topics.Manager
sessionMgr *sessions.Manager
// messagePool []chan *Message
auth auth.Auth
bridgeMQ bridge.BridgeMQ
}
func newMessagePool() []chan *Message {
@@ -61,13 +59,47 @@ func newMessagePool() []chan *Message {
return pool
}
func getAdditionalLogFields(clientIdentifier string, conn net.Conn, additionalFields ...zapcore.Field) []zapcore.Field {
var wsConn *websocket.Conn = nil
var wsEnabled bool
result := []zapcore.Field{}
switch conn.(type) {
case *websocket.Conn:
wsEnabled = true
wsConn = conn.(*websocket.Conn)
case *net.TCPConn:
wsEnabled = false
}
// add optional fields
if len(additionalFields) > 0 {
result = append(result, additionalFields...)
}
// add client ID
result = append(result, zap.String("clientID", clientIdentifier))
// add remote connection address
if !wsEnabled && conn != nil && conn.RemoteAddr() != nil {
result = append(result, zap.Stringer("addr", conn.RemoteAddr()))
} else if wsEnabled && wsConn != nil && wsConn.Request() != nil {
result = append(result, zap.String("addr", wsConn.Request().RemoteAddr))
}
return result
}
func NewBroker(config *Config) (*Broker, error) {
if config == nil {
config = DefaultConfig
}
b := &Broker{
id: GenUniqueId(),
config: config,
wpool: pool.New(config.Worker),
nodes: make(map[string]interface{}),
queues: make(map[string]int),
clusterPool: make(chan *Message),
}
@@ -92,19 +124,14 @@ func NewBroker(config *Config) (*Broker, error) {
}
b.tlsConfig = tlsconfig
}
if b.config.Acl {
aclconfig, err := acl.AclConfigLoad(b.config.AclConf)
if err != nil {
log.Error("Load acl conf error", zap.Error(err))
return nil, err
}
b.AclConfig = aclconfig
b.StartAclWatcher()
}
b.auth = b.config.Plugin.Auth
b.bridgeMQ = b.config.Plugin.Bridge
return b, nil
}
func (b *Broker) SubmitWork(msg *Message) {
func (b *Broker) SubmitWork(clientId string, msg *Message) {
if b.wpool == nil {
b.wpool = pool.New(b.config.Worker)
}
@@ -112,7 +139,7 @@ func (b *Broker) SubmitWork(msg *Message) {
if msg.client.typ == CLUSTER {
b.clusterPool <- msg
} else {
b.wpool.Submit(func() {
b.wpool.Submit(clientId, func() {
ProcessMessage(msg)
})
}
@@ -125,11 +152,24 @@ func (b *Broker) Start() {
return
}
//listen clinet over tcp
if b.config.HTTPPort != "" {
go InitHTTPMoniter(b)
}
//listen client over tcp
if b.config.Port != "" {
go b.StartClientListening(false)
}
//listen client over unix
if b.config.Port == "" && b.config.UnixFilePath != "" {
go b.StartUnixSocketClientListening(b.config.UnixFilePath, true)
}
//listen client over windows pipe
if b.config.Port == "" && b.config.UnixFilePath == "" && b.config.WindowsPipeName != "" {
go b.StartPipeSocketListening(b.config.WindowsPipeName, true)
}
//listen for cluster
if b.config.Cluster.Port != "" {
go b.StartClusterListening()
@@ -151,125 +191,144 @@ func (b *Broker) Start() {
b.ConnectToDiscovery()
}
//system monitor
go StateMonitor()
if b.config.Debug {
startPProf()
}
}
func startPProf() {
go func() {
http.ListenAndServe(":10060", nil)
}()
}
func StateMonitor() {
v, _ := mem.VirtualMemory()
timeSticker := time.NewTicker(time.Second * 30)
for {
select {
case <-timeSticker.C:
if v.UsedPercent > 75 {
debug.FreeOSMemory()
}
}
}
}
func (b *Broker) StartWebsocketListening() {
path := b.config.WsPath
hp := ":" + b.config.WsPort
log.Info("Start Websocket Listener on:", zap.String("hp", hp), zap.String("path", path))
http.Handle(path, websocket.Handler(b.wsHandler))
ws := &websocket.Server{Handler: websocket.Handler(b.wsHandler)}
mux := http.NewServeMux()
mux.Handle(path, ws)
var err error
if b.config.WsTLS {
err = http.ListenAndServeTLS(hp, b.config.TlsInfo.CertFile, b.config.TlsInfo.KeyFile, nil)
err = http.ListenAndServeTLS(hp, b.config.TlsInfo.CertFile, b.config.TlsInfo.KeyFile, mux)
} else {
err = http.ListenAndServe(hp, nil)
err = http.ListenAndServe(hp, mux)
}
if err != nil {
log.Error("ListenAndServe:" + err.Error())
log.Error("ListenAndServe" + err.Error())
return
}
}
func (b *Broker) wsHandler(ws *websocket.Conn) {
// io.Copy(ws, ws)
atomic.AddUint64(&b.cid, 1)
ws.PayloadType = websocket.BinaryFrame
b.handleConnection(CLIENT, ws)
err := b.handleConnection(CLIENT, ws)
if err != nil {
ws.Close()
}
}
func (b *Broker) StartClientListening(Tls bool) {
var err error
var l net.Listener
// Retry listening indefinitely so that specifying IP addresses
// (e.g. --host=10.0.0.217) starts working once the IP address is actually
// configured on the interface.
for {
if Tls {
hp := b.config.TlsHost + ":" + b.config.TlsPort
l, err = tls.Listen("tcp", hp, b.tlsConfig)
log.Info("Start TLS Listening client on ", zap.String("hp", hp))
} else {
hp := b.config.Host + ":" + b.config.Port
l, err = net.Listen("tcp", hp)
log.Info("Start Listening client on ", zap.String("hp", hp))
}
if err == nil {
break // successfully listening
}
log.Error("Error listening on ", zap.Error(err))
time.Sleep(1 * time.Second)
}
tmpDelay := 10 * ACCEPT_MIN_SLEEP
for {
conn, err := l.Accept()
if err != nil {
if ne, ok := err.(net.Error); ok && ne.Temporary() {
log.Error(
"Temporary Client Accept Error, sleeping",
zap.Error(ne),
zap.Duration("sleeping", tmpDelay/time.Millisecond),
)
time.Sleep(tmpDelay)
tmpDelay *= 2
if tmpDelay > ACCEPT_MAX_SLEEP {
tmpDelay = ACCEPT_MAX_SLEEP
}
} else {
log.Error("Accept error", zap.Error(err))
}
continue
}
tmpDelay = ACCEPT_MIN_SLEEP
atomic.AddUint64(&b.cid, 1)
go func() {
err := b.handleConnection(CLIENT, conn)
if err != nil {
conn.Close()
}
}()
}
}
func (b *Broker) StartUnixSocketClientListening(socketPath string, unixSocket bool) {
var err error
var l net.Listener
for {
if unixSocket {
if FileExist(socketPath) {
// remove the stale socket file left by a previous run,
// otherwise ListenUnix fails with "address already in use"
if err := os.Remove(socketPath); err != nil {
log.Error("Remove Unix socketPath ", zap.Error(err))
}
}
conn, _ := net.ResolveUnixAddr("unix", socketPath)
l, err = net.ListenUnix("unix", conn)
log.Info("Start Listening client on Unix socket", zap.String("socketPath", socketPath))
}
if err == nil {
break // successfully listening
}
log.Error("Error listening on ", zap.Error(err))
time.Sleep(1 * time.Second)
}
tmpDelay := 10 * ACCEPT_MIN_SLEEP
for {
conn, err := l.Accept()
if err != nil {
if ne, ok := err.(net.Error); ok && ne.Temporary() {
log.Error(
"Temporary Client Accept Error, sleeping",
zap.Error(ne),
zap.Duration("sleeping", tmpDelay/time.Millisecond),
)
time.Sleep(tmpDelay)
tmpDelay *= 2
if tmpDelay > ACCEPT_MAX_SLEEP {
tmpDelay = ACCEPT_MAX_SLEEP
}
} else {
log.Error("Accept error", zap.Error(err))
}
continue
}
tmpDelay = ACCEPT_MIN_SLEEP
go func() {
err := b.handleConnection(CLIENT, conn)
if err != nil {
conn.Close()
}
}()
}
}
@@ -279,7 +338,7 @@ func (b *Broker) StartClusterListening() {
l, e := net.Listen("tcp", hp)
if e != nil {
log.Error("Error listening on", zap.Error(e))
return
}
@@ -288,47 +347,82 @@ func (b *Broker) StartClusterListening() {
conn, err := l.Accept()
if err != nil {
if ne, ok := err.(net.Error); ok && ne.Temporary() {
log.Error(
"Temporary Client Accept Error, sleeping",
zap.Error(ne),
zap.Duration("sleeping", tmpDelay/time.Millisecond),
)
time.Sleep(tmpDelay)
tmpDelay *= 2
if tmpDelay > ACCEPT_MAX_SLEEP {
tmpDelay = ACCEPT_MAX_SLEEP
}
} else {
log.Error("Accept error", zap.Error(err))
}
continue
}
tmpDelay = ACCEPT_MIN_SLEEP
go func() {
err := b.handleConnection(ROUTER, conn)
if err != nil {
conn.Close()
}
}()
}
}
func (b *Broker) DisConnClientByClientId(clientId string) {
cli, loaded := b.clients.LoadAndDelete(clientId)
if !loaded {
return
}
conn, success := cli.(*client)
if !success {
return
}
conn.Close()
}
func (b *Broker) handleConnection(typ int, conn net.Conn) error {
//process connect packet
packet, err := packets.ReadPacket(conn)
if err != nil {
return fmt.Errorf("read connect packet error: %v", err)
}
if packet == nil {
return errors.New("received nil packet")
}
msg, ok := packet.(*packets.ConnectPacket)
if !ok {
return errors.New("received msg that was not Connect")
}
log.Info("read connect from ", getAdditionalLogFields(msg.ClientIdentifier, conn)...)
connack := packets.NewControlPacket(packets.Connack).(*packets.ConnackPacket)
connack.ReturnCode = packets.Accepted
connack.SessionPresent = msg.CleanSession
connack.ReturnCode = msg.Validate()
if connack.ReturnCode != packets.Accepted {
if err := connack.Write(conn); err != nil {
return fmt.Errorf("send connack error:%v,clientID:%v,conn:%v", err, msg.ClientIdentifier, conn)
}
return fmt.Errorf("connect packet validate failed with connack.ReturnCode: %v", connack.ReturnCode)
}
if typ == CLIENT && !b.CheckConnectAuth(msg.ClientIdentifier, msg.Username, string(msg.Password)) {
connack.ReturnCode = packets.ErrRefusedNotAuthorised
if err := connack.Write(conn); err != nil {
return fmt.Errorf("send connack error:%v,clientID:%v,conn:%v", err, msg.ClientIdentifier, conn)
}
return fmt.Errorf("connect packet CheckConnectAuth failed with connack.ReturnCode: %v", connack.ReturnCode)
}
if err := connack.Write(conn); err != nil {
return fmt.Errorf("send connack error:%v,clientID:%v,conn:%v", err, msg.ClientIdentifier, conn)
}
willmsg := packets.NewControlPacket(packets.Publish).(*packets.PublishPacket)
@@ -358,45 +452,62 @@ func (b *Broker) handleConnection(typ int, conn net.Conn) {
c.init()
if err := b.getSession(c, msg, connack); err != nil {
return fmt.Errorf("get session error:%v,clientID:%v,conn:%v", err, msg.ClientIdentifier, conn)
}
cid := c.info.clientID
var exists bool
var old interface{}
switch typ {
case CLIENT:
old, exists = b.clients.Load(cid)
if exists {
if ol, ok := old.(*client); ok {
log.Warn("client exists, close old client", getAdditionalLogFields(ol.info.clientID, ol.conn)...)
ol.Close()
}
}
b.clients.Store(cid, c)
var pubPack = PubPacket{}
if willmsg != nil {
pubPack.TopicName = info.willMsg.TopicName
pubPack.Payload = info.willMsg.Payload
}
pubInfo := Info{
ClientID: info.clientID,
Username: info.username,
Password: info.password,
Keepalive: info.keepalive,
WillMsg: pubPack,
}
b.OnlineOfflineNotification(pubInfo, true, c.lastMsgTime)
{
b.Publish(&bridge.Elements{
ClientID: msg.ClientIdentifier,
Username: msg.Username,
Action: bridge.Connect,
Timestamp: time.Now().Unix(),
})
}
case ROUTER:
old, exists = b.routes.Load(cid)
if exists {
if ol, ok := old.(*client); ok {
log.Warn("router exists, close old router", getAdditionalLogFields(ol.info.clientID, ol.conn)...)
ol.Close()
}
}
b.routes.Store(cid, c)
}
c.readLoop()
return nil
}
func (b *Broker) ConnectToDiscovery() {
@@ -406,10 +517,10 @@ func (b *Broker) ConnectToDiscovery() {
for {
conn, err = net.Dial("tcp", b.config.Router)
if err != nil {
log.Error("Error trying to connect to route", zap.Error(err))
log.Debug("Connect to route timeout, retry...")
if tempDelay == 0 {
tempDelay = 1 * time.Second
} else {
tempDelay *= 2
@@ -423,7 +534,7 @@ func (b *Broker) ConnectToDiscovery() {
}
break
}
log.Debug("connect to router success", zap.String("Router", b.config.Router))
cid := b.id
info := info{
@@ -473,15 +584,15 @@ func (b *Broker) connectRouter(id, addr string) {
conn, err = net.Dial("tcp", addr)
if err != nil {
log.Error("Error trying to connect to route", zap.Error(err))
if retryTimes > 50 {
return
}
log.Debug("Connect to route timeout, retry...")
if timeDelay == 0 {
timeDelay = 1 * time.Second
} else {
timeDelay *= 2
@@ -519,7 +630,6 @@ func (b *Broker) connectRouter(id, addr string) {
c.SendConnect()
go c.readLoop()
go c.StartPing()
@@ -548,71 +658,71 @@ func (b *Broker) checkNodeExist(id, url string) bool {
}
func (b *Broker) CheckRemoteExist(remoteID, url string) bool {
exists := false
b.remotes.Range(func(key, value interface{}) bool {
v, ok := value.(*client)
if ok {
if v.route.remoteUrl == url {
v.route.remoteID = remoteID
exists = true
return false
}
}
return true
})
return exists
}
func (b *Broker) SendLocalSubsToRouter(c *client) {
subInfo := packets.NewControlPacket(packets.Subscribe).(*packets.SubscribePacket)
b.clients.Range(func(key, value interface{}) bool {
client, ok := value.(*client)
if !ok {
return true
}
client.subMapMu.RLock()
defer client.subMapMu.RUnlock()
subs := client.subMap
for _, sub := range subs {
subInfo.Topics = append(subInfo.Topics, sub.topic)
subInfo.Qoss = append(subInfo.Qoss, sub.qos)
}
return true
})
if len(subInfo.Topics) > 0 {
if err := c.WriterPacket(subInfo); err != nil {
log.Error("Send localsubs To Router error", zap.Error(err))
}
}
}
func (b *Broker) BroadcastInfoMessage(remoteID string, msg *packets.PublishPacket) {
b.routes.Range(func(key, value interface{}) bool {
if r, ok := value.(*client); ok {
if r.route.remoteID == remoteID {
return true
}
r.WriterPacket(msg)
}
return true
})
// log.Info("BroadcastInfoMessage success ")
}
func (b *Broker) BroadcastSubOrUnsubMessage(packet packets.ControlPacket) {
b.routes.Range(func(key, value interface{}) bool {
if r, ok := value.(*client); ok {
r.WriterPacket(packet)
}
return true
})
// log.Info("BroadcastSubscribeMessage remotes: ", s.remotes)
}
func (b *Broker) removeClient(c *client) {
clientId := c.info.clientID
typ := c.typ
switch typ {
case CLIENT:
@@ -622,7 +732,6 @@ func (b *Broker) removeClient(c *client) {
case REMOTE:
b.remotes.Delete(clientId)
}
}
func (b *Broker) PublishMessage(packet *packets.PublishPacket) {
@@ -632,38 +741,81 @@ func (b *Broker) PublishMessage(packet *packets.PublishPacket) {
err := b.topicsMgr.Subscribers([]byte(packet.TopicName), packet.Qos, &subs, &qoss)
b.mu.Unlock()
if err != nil {
log.Error("search sub client error", zap.Error(err))
return
}
for _, sub := range subs {
s, ok := sub.(*subscription)
if ok {
if err := s.client.WriterPacket(packet); err != nil {
log.Error("write message error", zap.Error(err))
}
}
}
}
func (b *Broker) PublishMessageByClientId(packet *packets.PublishPacket, clientId string) error {
// Load (not LoadAndDelete): publishing to a client must not evict it
// from the connected-clients map.
cli, loaded := b.clients.Load(clientId)
if !loaded {
return fmt.Errorf("clientId %s not connected", clientId)
}
conn, success := cli.(*client)
if !success {
return fmt.Errorf("clientId %s loaded fail", clientId)
}
return conn.WriterPacket(packet)
}
func (b *Broker) BroadcastUnSubscribe(topicsToUnSubscribeFrom []string) {
if len(topicsToUnSubscribeFrom) == 0 {
return
}
unsub := packets.NewControlPacket(packets.Unsubscribe).(*packets.UnsubscribePacket)
unsub.Topics = append(unsub.Topics, topicsToUnSubscribeFrom...)
b.BroadcastSubOrUnsubMessage(unsub)
}
type OnlineOfflineMsg struct {
ClientID string `json:"clientID"`
Online bool `json:"online"`
Timestamp string `json:"timestamp"`
ClientInfo Info `json:"info"`
LastMsgTime int64 `json:"lastMsg"`
}
func (b *Broker) OnlineOfflineNotification(info Info, online bool, lastMsg int64) {
packet := packets.NewControlPacket(packets.Publish).(*packets.PublishPacket)
packet.TopicName = "$SYS/broker/connection/clients/" + info.ClientID
packet.Qos = 0
msg := OnlineOfflineMsg{
ClientID: info.ClientID,
Online: online,
Timestamp: time.Now().UTC().Format(time.RFC3339),
ClientInfo: info,
LastMsgTime: lastMsg,
}
if payload, err := encJson.Marshal(msg); err != nil {
// marshal failed: fall back to the legacy hand-built JSON so the
// API contract on this $SYS topic is not broken
packet.Payload = []byte(fmt.Sprintf(`{"clientID":"%s","online":%v,"timestamp":"%s"}`, info.ClientID, online, time.Now().UTC().Format(time.RFC3339)))
} else {
packet.Payload = payload
}
b.PublishMessage(packet)
}
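The notification payload published to $SYS/broker/connection/clients/&lt;clientID&gt; is the JSON form of OnlineOfflineMsg. A self-contained sketch of what subscribers receive (the struct definitions are copied from above; the sample values are made up):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Copies of the broker's payload structs so the example compiles on its own.
type PubPacket struct {
	TopicName string `json:"topicName"`
	Payload   []byte `json:"payload"`
}

type Info struct {
	ClientID  string    `json:"clientId"`
	Username  string    `json:"username"`
	Password  []byte    `json:"password"`
	Keepalive uint16    `json:"keepalive"`
	WillMsg   PubPacket `json:"willMsg"`
}

type OnlineOfflineMsg struct {
	ClientID    string `json:"clientID"`
	Online      bool   `json:"online"`
	Timestamp   string `json:"timestamp"`
	ClientInfo  Info   `json:"info"`
	LastMsgTime int64  `json:"lastMsg"`
}

// marshalNotification renders the payload the same way the broker does.
func marshalNotification(msg OnlineOfflineMsg) (string, error) {
	b, err := json.Marshal(msg)
	return string(b), err
}

func main() {
	out, _ := marshalNotification(OnlineOfflineMsg{
		ClientID:    "sensor-1",
		Online:      true,
		Timestamp:   "2024-04-19T09:00:00Z",
		ClientInfo:  Info{ClientID: "sensor-1", Username: "alice", Keepalive: 60},
		LastMsgTime: 1713517200,
	})
	fmt.Println(out)
}
```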
func FileExist(name string) bool {
_, err := os.Stat(name)
if err == nil {
return true
} else if os.IsNotExist(err) {
return false
} else {
panic(err)
}
}


@@ -1,24 +1,29 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package broker
import (
"bytes"
"context"
"errors"
"math/rand"
"net"
"reflect"
"regexp"
"strings"
"sync"
"time"
"unicode/utf8"
"github.com/eapache/queue"
"github.com/eclipse/paho.mqtt.golang/packets"
"github.com/fhmq/hmq/broker/lib/sessions"
"github.com/fhmq/hmq/broker/lib/topics"
"github.com/fhmq/hmq/plugins/bridge"
"go.uber.org/zap"
"golang.org/x/net/websocket"
)
const (
// BrokerInfoTopic special pub topic for cluster info
BrokerInfoTopic = "broker000100101info"
// CLIENT is an end user.
CLIENT = 0
@@ -28,34 +33,73 @@ const (
REMOTE = 2
CLUSTER = 3
)
const (
_GroupTopicRegexp = `^\$share/([0-9a-zA-Z_-]+)/(.*)$`
)
const (
Connected = 1
Disconnected = 2
)
const (
awaitRelTimeout int64 = 20
retryInterval int64 = 20
)
var (
groupCompile = regexp.MustCompile(_GroupTopicRegexp)
)
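_GroupTopicRegexp is how shared subscriptions are recognized: `$share/<group>/<filter>`. A quick sketch of how FindStringSubmatch splits such a topic (same pattern as above; parseShare and the topic strings are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the broker's _GroupTopicRegexp.
var groupCompile = regexp.MustCompile(`^\$share/([0-9a-zA-Z_-]+)/(.*)$`)

// parseShare splits a "$share/<group>/<filter>" topic; ok is false for
// ordinary topics, which subscribe as-is.
func parseShare(topic string) (group, filter string, ok bool) {
	substr := groupCompile.FindStringSubmatch(topic)
	if len(substr) != 3 {
		return "", "", false
	}
	return substr[1], substr[2], true
}

func main() {
	group, filter, ok := parseShare("$share/group1/sensors/#")
	fmt.Println(group, filter, ok) // group1 sensors/# true
	_, _, ok = parseShare("sensors/#")
	fmt.Println(ok) // false
}
```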
type client struct {
typ int
mu sync.Mutex
broker *Broker
conn net.Conn
info info
route route
status int
ctx context.Context
cancelFunc context.CancelFunc
session *sessions.Session
subMap map[string]*subscription
subMapMu sync.RWMutex
topicsMgr *topics.Manager
subs []interface{}
qoss []byte
rmsgs []*packets.PublishPacket
routeSubMap map[string]uint64
routeSubMapMu sync.Mutex
awaitingRel map[uint16]int64
awaitingRelMu sync.RWMutex
maxAwaitingRel int
inflight map[uint16]*inflightElem
inflightMu sync.RWMutex
mqueue *queue.Queue
retryTimer *time.Timer
retryTimerLock sync.Mutex
lastMsgTime int64
}
type InflightStatus uint8
const (
Publish InflightStatus = 0
Pubrel InflightStatus = 1
)
type inflightElem struct {
status InflightStatus
packet *packets.PublishPacket
timestamp int64
}
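inflight tracks the broker's half of the QoS 2 exchange: an entry starts in Publish, moves to Pubrel when PUBREC arrives, and is deleted on PUBCOMP. A toy walk-through of those transitions, with no networking and onPubrec as a hypothetical helper mirroring the PUBREC branch of ProcessMessage:

```go
package main

import "fmt"

type InflightStatus uint8

const (
	Publish InflightStatus = 0
	Pubrel  InflightStatus = 1
)

type inflightElem struct {
	status InflightStatus
}

// onPubrec advances a tracked packet from Publish to Pubrel; it reports
// false for unknown or duplicate PUBREC ids.
func onPubrec(inflight map[uint16]*inflightElem, id uint16) bool {
	e, ok := inflight[id]
	if !ok || e.status != Publish {
		return false
	}
	e.status = Pubrel
	return true
}

func main() {
	inflight := map[uint16]*inflightElem{}
	inflight[1] = &inflightElem{status: Publish} // outgoing QoS 2 PUBLISH
	fmt.Println(onPubrec(inflight, 1)) // true: now awaiting PUBCOMP
	fmt.Println(onPubrec(inflight, 1)) // false: duplicate PUBREC
	delete(inflight, 1)                // PUBCOMP received
	fmt.Println(len(inflight))         // 0
}
```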
type subscription struct {
client *client
topic string
qos byte
share bool
groupName string
}
type info struct {
@@ -68,22 +112,49 @@ type info struct {
remoteIP string
}
type PubPacket struct {
TopicName string `json:"topicName"`
Payload []byte `json:"payload"`
}
type Info struct {
ClientID string `json:"clientId"`
Username string `json:"username"`
Password []byte `json:"password"`
Keepalive uint16 `json:"keepalive"`
WillMsg PubPacket `json:"willMsg"`
}
type route struct {
remoteID string
remoteUrl string
}
var (
DisconnectedPacket = packets.NewControlPacket(packets.Disconnect).(*packets.DisconnectPacket)
r = rand.New(rand.NewSource(time.Now().UnixNano()))
)
func (c *client) init() {
c.lastMsgTime = time.Now().Unix() //mark the connection packet time as last time messaged
c.status = Connected
c.info.localIP, _, _ = net.SplitHostPort(c.conn.LocalAddr().String())
remoteAddr := c.conn.RemoteAddr()
remoteNetwork := remoteAddr.Network()
c.info.remoteIP = ""
if remoteNetwork != "websocket" {
c.info.remoteIP, _, _ = net.SplitHostPort(remoteAddr.String())
} else {
ws := c.conn.(*websocket.Conn)
c.info.remoteIP, _, _ = net.SplitHostPort(ws.Request().RemoteAddr)
}
c.ctx, c.cancelFunc = context.WithCancel(context.Background())
c.subMap = make(map[string]*subscription)
c.topicsMgr = c.broker.topicsMgr
c.routeSubMap = make(map[string]uint64)
c.awaitingRel = make(map[uint16]int64)
c.inflight = make(map[uint16]*inflightElem)
c.mqueue = queue.New()
}
func (c *client) readLoop() {
@@ -102,46 +173,186 @@ func (c *client) readLoop() {
return
default:
//add read timeout
if keepAlive > 0 {
if err := nc.SetReadDeadline(time.Now().Add(timeOut)); err != nil {
log.Error("set read timeout error: ", zap.Error(err), zap.String("ClientID", c.info.clientID))
msg := &Message{
client: c,
packet: DisconnectedPacket,
}
b.SubmitWork(c.info.clientID, msg)
return
}
}
packet, err := packets.ReadPacket(nc)
if err != nil {
log.Error("read packet error: ", zap.Error(err), zap.String("ClientID", c.info.clientID))
msg := &Message{
client: c,
packet: DisconnectedPacket,
}
b.SubmitWork(c.info.clientID, msg)
return
}
// if packet is disconnect from client, then need to break the read packet loop and clear will msg.
if _, isDisconnect := packet.(*packets.DisconnectPacket); isDisconnect {
c.info.willMsg = nil
c.cancelFunc()
} else {
c.lastMsgTime = time.Now().Unix()
}
msg := &Message{
client: c,
packet: packet,
}
b.SubmitWork(c.info.clientID, msg)
}
}
}
// extractPacketFields reads a control packet and extracts only the fields
// that need to pass UTF-8 validation
func extractPacketFields(msgPacket packets.ControlPacket) []string {
var fields []string
// Get packet type
switch packet := msgPacket.(type) {
case *packets.PublishPacket:
fields = append(fields, packet.TopicName)
case *packets.UnsubscribePacket:
fields = append(fields, packet.Topics...)
}
return fields
}
// validatePacketFields checks whether any of a control packet's fields
// contains an ill-formed UTF-8 string
func validatePacketFields(msgPacket packets.ControlPacket) (validFields bool) {
// Extract just fields that needs validation
fields := extractPacketFields(msgPacket)
for _, field := range fields {
// Perform the basic UTF-8 validation
if !utf8.ValidString(field) {
validFields = false
return
}
// A UTF-8 encoded string MUST NOT include an encoding of the null
// character U+0000
// If a receiver (Server or Client) receives a Control Packet containing U+0000
// it MUST close the Network Connection
// http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.pdf page 14
if bytes.ContainsAny([]byte(field), "\u0000") {
validFields = false
return
}
}
// All fields have been validated successfully
validFields = true
return
}
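Per MQTT 3.1.1, a field fails this check either by being ill-formed UTF-8 or by containing U+0000. Both rules in isolation (validTopic is a hypothetical helper applying the same two tests as validatePacketFields):

```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// validTopic applies the same two checks as validatePacketFields:
// well-formed UTF-8, and no U+0000 anywhere in the string.
func validTopic(s string) bool {
	return utf8.ValidString(s) && !strings.ContainsRune(s, '\u0000')
}

func main() {
	fmt.Println(validTopic("sensors/temp"))       // true
	fmt.Println(validTopic("bad\x00topic"))       // false: embedded NUL
	fmt.Println(validTopic(string([]byte{0xff}))) // false: not UTF-8
}
```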
func ProcessMessage(msg *Message) {
c := msg.client
ca := msg.packet
if ca == nil {
return
}
if c.typ == CLIENT {
log.Debug("Recv message:", zap.String("message type", reflect.TypeOf(msg.packet).String()[9:]), zap.String("ClientID", c.info.clientID))
}
// Perform field validation
if !validatePacketFields(ca) {
// http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.pdf
// Page 14
//
// If a Server or Client receives a Control Packet
// containing ill-formed UTF-8 it MUST close the Network Connection
_ = c.conn.Close()
// Update client status
//c.status = Disconnected
log.Error("Client disconnected due to malformed packet", zap.String("ClientID", c.info.clientID))
return
}
switch ca.(type) {
case *packets.ConnackPacket:
case *packets.ConnectPacket:
case *packets.PublishPacket:
packet := ca.(*packets.PublishPacket)
c.ProcessPublish(packet)
case *packets.PubackPacket:
packet := ca.(*packets.PubackPacket)
c.inflightMu.Lock()
if _, found := c.inflight[packet.MessageID]; found {
delete(c.inflight, packet.MessageID)
} else {
log.Error("Duplicated PUBACK PacketId", zap.Uint16("MessageID", packet.MessageID))
}
c.inflightMu.Unlock()
case *packets.PubrecPacket:
packet := ca.(*packets.PubrecPacket)
c.inflightMu.RLock()
ielem, found := c.inflight[packet.MessageID]
c.inflightMu.RUnlock()
if found {
if ielem.status == Publish {
ielem.status = Pubrel
ielem.timestamp = time.Now().Unix()
} else if ielem.status == Pubrel {
log.Error("Duplicated PUBREC PacketId", zap.Uint16("MessageID", packet.MessageID))
}
} else {
log.Error("The PUBREC PacketId is not found.", zap.Uint16("MessageID", packet.MessageID))
}
pubrel := packets.NewControlPacket(packets.Pubrel).(*packets.PubrelPacket)
pubrel.MessageID = packet.MessageID
if err := c.WriterPacket(pubrel); err != nil {
log.Error("send pubrel error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
return
}
case *packets.PubrelPacket:
packet := ca.(*packets.PubrelPacket)
_ = c.pubRel(packet.MessageID)
pubcomp := packets.NewControlPacket(packets.Pubcomp).(*packets.PubcompPacket)
pubcomp.MessageID = packet.MessageID
if err := c.WriterPacket(pubcomp); err != nil {
log.Error("send pubcomp error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
return
}
case *packets.PubcompPacket:
packet := ca.(*packets.PubcompPacket)
c.inflightMu.Lock()
delete(c.inflight, packet.MessageID)
c.inflightMu.Unlock()
case *packets.SubscribePacket:
packet := ca.(*packets.SubscribePacket)
c.ProcessSubscribe(packet)
@@ -161,18 +372,32 @@ func ProcessMessage(msg *Message) {
}
func (c *client) ProcessPublish(packet *packets.PublishPacket) {
switch c.typ {
case CLIENT:
c.processClientPublish(packet)
case ROUTER:
c.processRouterPublish(packet)
case CLUSTER:
c.processRemotePublish(packet)
}
}
func (c *client) processRemotePublish(packet *packets.PublishPacket) {
if c.status == Disconnected {
return
}
topic := packet.TopicName
if topic == BrokerInfoTopic {
c.ProcessInfo(packet)
return
}
}
func (c *client) processRouterPublish(packet *packets.PublishPacket) {
if c.status == Disconnected {
return
}
@@ -196,6 +421,60 @@ func (c *client) ProcessPublish(packet *packets.PublishPacket) {
}
func (c *client) processClientPublish(packet *packets.PublishPacket) {
topic := packet.TopicName
if !c.broker.CheckTopicAuth(PUB, c.info.clientID, c.info.username, c.info.remoteIP, topic) {
log.Error("Pub Topics Auth failed, ", zap.String("topic", topic), zap.String("ClientID", c.info.clientID))
return
}
//publish to bridge mq
cost := c.broker.Publish(&bridge.Elements{
ClientID: c.info.clientID,
Username: c.info.username,
Action: bridge.Publish,
Timestamp: time.Now().Unix(),
Payload: string(packet.Payload),
Topic: topic,
})
if cost {
return
}
switch packet.Qos {
case QosAtMostOnce:
c.ProcessPublishMessage(packet)
case QosAtLeastOnce:
puback := packets.NewControlPacket(packets.Puback).(*packets.PubackPacket)
puback.MessageID = packet.MessageID
if err := c.WriterPacket(puback); err != nil {
log.Error("send puback error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
return
}
c.ProcessPublishMessage(packet)
case QosExactlyOnce:
if err := c.registerPublishPacketId(packet.MessageID); err != nil {
return
} else {
pubrec := packets.NewControlPacket(packets.Pubrec).(*packets.PubrecPacket)
pubrec.MessageID = packet.MessageID
if err := c.WriterPacket(pubrec); err != nil {
log.Error("send pubrec error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
return
}
c.ProcessPublishMessage(packet)
}
return
default:
log.Error("publish with unknown qos", zap.String("ClientID", c.info.clientID))
return
}
}
func (c *client) ProcessPublishMessage(packet *packets.PublishPacket) {
b := c.broker
@@ -210,20 +489,18 @@ func (c *client) ProcessPublishMessage(packet *packets.PublishPacket) {
}
}
c.mu.Lock()
err := c.topicsMgr.Subscribers([]byte(packet.TopicName), packet.Qos, &c.subs, &c.qoss)
c.mu.Unlock()
if err != nil {
log.Error("Error retrieving subscribers list: ", zap.String("ClientID", c.info.clientID))
return
}
// log.Info("psubs num: ", len(r.psubs))
if len(c.subs) == 0 {
return
}
var qsub []int
for i, sub := range c.subs {
s, ok := sub.(*subscription)
if ok {
if s.client.typ == ROUTER {
@@ -231,17 +508,36 @@ func (c *client) ProcessPublishMessage(packet *packets.PublishPacket) {
continue
}
}
if s.share {
qsub = append(qsub, i)
} else {
publish(s, packet)
}
}
}
if len(qsub) > 0 {
idx := r.Intn(len(qsub))
sub := c.subs[qsub[idx]].(*subscription)
publish(sub, packet)
}
}
func (c *client) ProcessSubscribe(packet *packets.SubscribePacket) {
switch c.typ {
case CLIENT:
c.processClientSubscribe(packet)
case ROUTER:
fallthrough
case REMOTE:
c.processRouterSubscribe(packet)
}
}
func (c *client) processClientSubscribe(packet *packets.SubscribePacket) {
if c.status == Disconnected {
return
}
@@ -250,38 +546,73 @@ func (c *client) ProcessSubscribe(packet *packets.SubscribePacket) {
if b == nil {
return
}
subTopics := packet.Topics
qoss := packet.Qoss
suback := packets.NewControlPacket(packets.Suback).(*packets.SubackPacket)
suback.MessageID = packet.MessageID
var retcodes []byte
for i, topic := range subTopics {
t := topic
//check topic auth for client
if !b.CheckTopicAuth(SUB, c.info.clientID, c.info.username, c.info.remoteIP, topic) {
log.Error("Sub topic Auth failed: ", zap.String("topic", topic), zap.String("ClientID", c.info.clientID))
retcodes = append(retcodes, QosFailure)
continue
}
b.Publish(&bridge.Elements{
ClientID: c.info.clientID,
Username: c.info.username,
Action: bridge.Subscribe,
Timestamp: time.Now().Unix(),
Topic: topic,
})
groupName := ""
share := false
if strings.HasPrefix(topic, "$share/") {
substr := groupCompile.FindStringSubmatch(topic)
if len(substr) != 3 {
retcodes = append(retcodes, QosFailure)
continue
}
share = true
groupName = substr[1]
topic = substr[2]
}
c.subMapMu.Lock()
if oldSub, exist := c.subMap[t]; exist {
_ = c.topicsMgr.Unsubscribe([]byte(oldSub.topic), oldSub)
delete(c.subMap, t)
}
c.subMapMu.Unlock()
sub := &subscription{
topic: topic,
qos: qoss[i],
client: c,
share: share,
groupName: groupName,
}
rqos, err := c.topicsMgr.Subscribe([]byte(topic), qoss[i], sub)
if err != nil {
log.Error("subscribe error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
retcodes = append(retcodes, QosFailure)
continue
}
c.subMapMu.Lock()
c.subMap[t] = sub
c.subMapMu.Unlock()
_ = c.session.AddTopic(t, qoss[i])
retcodes = append(retcodes, rqos)
_ = c.topicsMgr.Retained([]byte(topic), &c.rmsgs)
}
suback.ReturnCodes = retcodes
@@ -292,9 +623,7 @@ func (c *client) ProcessSubscribe(packet *packets.SubscribePacket) {
return
}
//broadcast subscribe message
go b.BroadcastSubOrUnsubMessage(packet)
//process retain message
for _, rm := range c.rmsgs {
@@ -306,7 +635,82 @@ func (c *client) ProcessSubscribe(packet *packets.SubscribePacket) {
}
}
func (c *client) processRouterSubscribe(packet *packets.SubscribePacket) {
if c.status == Disconnected {
return
}
b := c.broker
if b == nil {
return
}
subTopics := packet.Topics
qoss := packet.Qoss
suback := packets.NewControlPacket(packets.Suback).(*packets.SubackPacket)
suback.MessageID = packet.MessageID
var retcodes []byte
for i, topic := range subTopics {
t := topic
groupName := ""
share := false
if strings.HasPrefix(topic, "$share/") {
substr := groupCompile.FindStringSubmatch(topic)
if len(substr) != 3 {
retcodes = append(retcodes, QosFailure)
continue
}
share = true
groupName = substr[1]
topic = substr[2]
}
sub := &subscription{
topic: topic,
qos: qoss[i],
client: c,
share: share,
groupName: groupName,
}
rqos, err := c.topicsMgr.Subscribe([]byte(topic), qoss[i], sub)
if err != nil {
log.Error("subscribe error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
retcodes = append(retcodes, QosFailure)
continue
}
c.subMapMu.Lock()
c.subMap[t] = sub
c.subMapMu.Unlock()
c.routeSubMapMu.Lock()
addSubMap(c.routeSubMap, topic)
c.routeSubMapMu.Unlock()
retcodes = append(retcodes, rqos)
}
suback.ReturnCodes = retcodes
err := c.WriterPacket(suback)
if err != nil {
log.Error("send suback error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
return
}
}
func (c *client) ProcessUnSubscribe(packet *packets.UnsubscribePacket) {
switch c.typ {
case CLIENT:
c.processClientUnSubscribe(packet)
case ROUTER:
c.processRouterUnSubscribe(packet)
}
}
func (c *client) processRouterUnSubscribe(packet *packets.UnsubscribePacket) {
if c.status == Disconnected {
return
}
@@ -314,16 +718,70 @@ func (c *client) ProcessUnSubscribe(packet *packets.UnsubscribePacket) {
if b == nil {
return
}
unSubTopics := packet.Topics
for _, topic := range unSubTopics {
c.subMapMu.Lock()
if sub, exist := c.subMap[topic]; exist {
c.routeSubMapMu.Lock()
if retainNum := delSubMap(c.routeSubMap, topic); retainNum > 0 {
c.routeSubMapMu.Unlock()
c.subMapMu.Unlock()
continue
}
c.routeSubMapMu.Unlock()
_ = c.topicsMgr.Unsubscribe([]byte(sub.topic), sub)
delete(c.subMap, topic)
}
c.subMapMu.Unlock()
}
unsuback := packets.NewControlPacket(packets.Unsuback).(*packets.UnsubackPacket)
unsuback.MessageID = packet.MessageID
err := c.WriterPacket(unsuback)
if err != nil {
log.Error("send unsuback error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
return
}
}
func (c *client) processClientUnSubscribe(packet *packets.UnsubscribePacket) {
if c.status == Disconnected {
return
}
b := c.broker
if b == nil {
return
}
unSubTopics := packet.Topics
for _, topic := range unSubTopics {
//publish kafka
b.Publish(&bridge.Elements{
ClientID: c.info.clientID,
Username: c.info.username,
Action: bridge.Unsubscribe,
Timestamp: time.Now().Unix(),
Topic: topic,
})
c.subMapMu.Lock()
sub, exist := c.subMap[topic]
if exist {
_ = c.topicsMgr.Unsubscribe([]byte(sub.topic), sub)
_ = c.session.RemoveTopic(topic)
delete(c.subMap, topic)
}
c.subMapMu.Unlock()
}
unsuback := packets.NewControlPacket(packets.Unsuback).(*packets.UnsubackPacket)
@@ -335,9 +793,7 @@ func (c *client) ProcessUnSubscribe(packet *packets.UnsubscribePacket) {
return
}
// process unsubscribe message
if c.typ == CLIENT {
b.BroadcastSubOrUnsubMessage(packet)
}
b.BroadcastSubOrUnsubMessage(packet)
}
func (c *client) ProcessPing() {
@@ -361,41 +817,87 @@ func (c *client) Close() {
c.status = Disconnected
//wait for message complete
time.Sleep(1 * time.Second)
// time.Sleep(1 * time.Second)
// c.status = Disconnected
if c.conn != nil {
c.conn.Close()
c.conn = nil
}
b := c.broker
subs := c.subMap
if b != nil {
b.removeClient(c)
b.Publish(&bridge.Elements{
ClientID: c.info.clientID,
Username: c.info.username,
Action: bridge.Disconnect,
Timestamp: time.Now().Unix(),
})
if c.typ == CLIENT {
b.BroadcastUnSubscribe(subs)
//offline notification
b.OnlineOfflineNotification(c.info.clientID, false)
c.mu.Lock()
if c.conn != nil {
_ = c.conn.Close()
c.conn = nil
}
c.mu.Unlock()
if b == nil {
return
}
b.removeClient(c)
c.subMapMu.RLock()
defer c.subMapMu.RUnlock()
unSubTopics := make([]string, 0)
for topic, sub := range c.subMap {
unSubTopics = append(unSubTopics, topic)
// guard against the race where Close() runs before the client is fully initialized
if sub == nil || b.topicsMgr == nil {
continue
}
if c.info.willMsg != nil {
b.PublishMessage(c.info.willMsg)
}
if c.typ == CLUSTER {
b.ConnectToDiscovery()
}
//do reconnect
if c.typ == REMOTE {
go b.connectRouter(c.route.remoteID, c.route.remoteUrl)
if err := b.topicsMgr.Unsubscribe([]byte(sub.topic), sub); err != nil {
log.Error("unsubscribe error, ", zap.Error(err), zap.String("ClientID", c.info.clientID))
}
}
if c.typ == CLIENT {
b.BroadcastUnSubscribe(unSubTopics)
var pubPack = PubPacket{}
if c.info.willMsg != nil {
pubPack.TopicName = c.info.willMsg.TopicName
pubPack.Payload = c.info.willMsg.Payload
}
pubInfo := Info{
ClientID: c.info.clientID,
Username: c.info.username,
Password: c.info.password,
Keepalive: c.info.keepalive,
WillMsg: pubPack,
}
//offline notification
b.OnlineOfflineNotification(pubInfo, false, c.lastMsgTime)
}
if c.info.willMsg != nil {
b.PublishMessage(c.info.willMsg)
}
if c.typ == CLUSTER {
b.ConnectToDiscovery()
}
//do reconnect
if c.typ == REMOTE {
go b.connectRouter(c.route.remoteID, c.route.remoteUrl)
}
}
func (c *client) WriterPacket(packet packets.ControlPacket) error {
defer func() {
if err := recover(); err != nil {
log.Error("recover error, ", zap.Any("recover", err))
}
}()
if c.status == Disconnected {
return nil
}
@@ -409,7 +911,61 @@ func (c *client) WriterPacket(packet packets.ControlPacket) error {
}
c.mu.Lock()
err := packet.Write(c.conn)
c.mu.Unlock()
return err
defer c.mu.Unlock()
return packet.Write(c.conn)
}
func (c *client) registerPublishPacketId(packetId uint16) error {
if c.isAwaitingFull() {
log.Error("Dropped qos2 packet for too many awaiting_rel", zap.Uint16("id", packetId))
return errors.New("DROPPED_QOS2_PACKET_FOR_TOO_MANY_AWAITING_REL")
}
c.awaitingRelMu.Lock()
defer c.awaitingRelMu.Unlock()
if _, found := c.awaitingRel[packetId]; found {
return errors.New("RC_PACKET_IDENTIFIER_IN_USE")
}
c.awaitingRel[packetId] = time.Now().Unix()
time.AfterFunc(time.Duration(awaitRelTimeout)*time.Second, c.expireAwaitingRel)
return nil
}
func (c *client) isAwaitingFull() bool {
c.awaitingRelMu.RLock()
defer c.awaitingRelMu.RUnlock()
if c.maxAwaitingRel == 0 {
return false
}
if len(c.awaitingRel) < c.maxAwaitingRel {
return false
}
return true
}
func (c *client) expireAwaitingRel() {
c.awaitingRelMu.Lock()
defer c.awaitingRelMu.Unlock()
if len(c.awaitingRel) == 0 {
return
}
now := time.Now().Unix()
for packetId, Timestamp := range c.awaitingRel {
if now-Timestamp >= awaitRelTimeout {
log.Error("Dropped qos2 packet for await_rel_timeout", zap.Uint16("id", packetId))
delete(c.awaitingRel, packetId)
}
}
}
func (c *client) pubRel(packetId uint16) error {
c.awaitingRelMu.Lock()
defer c.awaitingRelMu.Unlock()
if _, found := c.awaitingRel[packetId]; found {
delete(c.awaitingRel, packetId)
} else {
log.Error("The PUBREL PacketId is not found", zap.Uint16("id", packetId))
return errors.New("RC_PACKET_IDENTIFIER_NOT_FOUND")
}
return nil
}
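Taken together, `registerPublishPacketId`, `isAwaitingFull`, and `pubRel` above form the receiver-side QoS 2 packet-id registry: an inbound PUBLISH id is held until its PUBREL arrives, with a cap on outstanding ids. A self-contained sketch of that registry (type and method names here are illustrative, not hmq's):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// relRegistry tracks QoS 2 publish packet IDs awaiting PUBREL,
// mirroring the awaitingRel map in the client above.
type relRegistry struct {
	mu      sync.Mutex
	pending map[uint16]int64
	max     int // 0 means unlimited, as with maxAwaitingRel
}

func newRelRegistry(max int) *relRegistry {
	return &relRegistry{pending: map[uint16]int64{}, max: max}
}

// register stores an inbound QoS 2 packet ID, rejecting overflow first
// and duplicates second, like registerPublishPacketId.
func (r *relRegistry) register(id uint16) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.max > 0 && len(r.pending) >= r.max {
		return errors.New("too many awaiting_rel")
	}
	if _, dup := r.pending[id]; dup {
		return errors.New("packet identifier in use")
	}
	r.pending[id] = time.Now().Unix()
	return nil
}

// release handles PUBREL: the ID must have been registered first.
func (r *relRegistry) release(id uint16) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, found := r.pending[id]; !found {
		return errors.New("packet identifier not found")
	}
	delete(r.pending, id)
	return nil
}

func main() {
	r := newRelRegistry(1)
	fmt.Println(r.register(1), r.register(2), r.release(1))
}
```

The broker additionally expires stale entries after `awaitRelTimeout` via `time.AfterFunc`; that sweep is omitted here for brevity.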

View File

@@ -1,15 +1,14 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package broker
import (
"crypto/md5"
"crypto/rand"
"encoding/base64"
"encoding/hex"
"io"
"reflect"
"time"
jsoniter "github.com/json-iterator/go"
"go.uber.org/zap"
"github.com/eclipse/paho.mqtt.golang/packets"
uuid "github.com/google/uuid"
)
const (
@@ -91,13 +90,151 @@ func equal(k1, k2 interface{}) bool {
return false
}
func GenUniqueId() string {
b := make([]byte, 48)
if _, err := io.ReadFull(rand.Reader, b); err != nil {
return ""
func addSubMap(m map[string]uint64, topic string) {
subNum, exist := m[topic]
if exist {
m[topic] = subNum + 1
} else {
m[topic] = 1
}
h := md5.New()
h.Write([]byte(base64.URLEncoding.EncodeToString(b)))
return hex.EncodeToString(h.Sum(nil))
// return GetMd5String()
}
func delSubMap(m map[string]uint64, topic string) uint64 {
subNum, exist := m[topic]
if exist {
if subNum > 1 {
m[topic] = subNum - 1
return subNum - 1
}
} else {
m[topic] = 0
}
return 0
}
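`addSubMap`/`delSubMap` above keep a per-topic reference count of router subscriptions: a router only drops the underlying subscription once `delSubMap` reports zero remaining subscribers. A compact restatement of that interplay:

```go
package main

import "fmt"

// addSubMap restates the broker helper: bump the per-topic count.
func addSubMap(m map[string]uint64, topic string) {
	m[topic]++
}

// delSubMap returns the remaining count after one unsubscribe; 0 means
// the caller may remove the real topic subscription, matching the
// retainNum check in processRouterUnSubscribe.
func delSubMap(m map[string]uint64, topic string) uint64 {
	if n := m[topic]; n > 1 {
		m[topic] = n - 1
		return n - 1
	}
	return 0
}

func main() {
	m := map[string]uint64{}
	addSubMap(m, "a/b")
	addSubMap(m, "a/b")
	fmt.Println(delSubMap(m, "a/b"), delSubMap(m, "a/b"))
}
```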
func GenUniqueId() string {
id, err := uuid.NewRandom()
if err != nil {
log.Error("uuid.NewRandom() error", zap.Error(err))
}
return id.String()
}
func wrapPublishPacket(packet *packets.PublishPacket) *packets.PublishPacket {
p := packet.Copy()
wrapPayload := map[string]interface{}{
"message_id": GenUniqueId(),
"payload": string(p.Payload),
}
b, _ := json.Marshal(wrapPayload)
p.Payload = b
return p
}
func unWrapPublishPacket(packet *packets.PublishPacket) *packets.PublishPacket {
p := packet.Copy()
if payload := jsoniter.Get(p.Payload, "payload").ToString(); payload != "" {
p.Payload = []byte(payload)
}
return p
}
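`wrapPublishPacket` envelopes the original payload in JSON alongside a generated message ID, and `unWrapPublishPacket` recovers it (falling back to the raw bytes when no envelope is present). A round-trip sketch of just the payload transformation — stdlib `encoding/json` is used here for self-containment where hmq uses jsoniter:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// wrapPayload mirrors wrapPublishPacket: the original bytes become a
// JSON envelope carrying a message ID next to the payload.
func wrapPayload(id string, payload []byte) []byte {
	b, _ := json.Marshal(map[string]string{
		"message_id": id,
		"payload":    string(payload),
	})
	return b
}

// unwrapPayload mirrors unWrapPublishPacket: recover the inner payload,
// leaving the bytes untouched when no envelope is present.
func unwrapPayload(b []byte) []byte {
	var env struct {
		Payload string `json:"payload"`
	}
	if err := json.Unmarshal(b, &env); err == nil && env.Payload != "" {
		return []byte(env.Payload)
	}
	return b
}

func main() {
	w := wrapPayload("id-1", []byte("hello"))
	fmt.Println(string(unwrapPayload(w)))
}
```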
func publish(sub *subscription, packet *packets.PublishPacket) {
switch packet.Qos {
case QosAtMostOnce:
err := sub.client.WriterPacket(packet)
if err != nil {
log.Error("process message for psub error, ", zap.Error(err))
}
case QosAtLeastOnce, QosExactlyOnce:
sub.client.inflightMu.Lock()
sub.client.inflight[packet.MessageID] = &inflightElem{status: Publish, packet: packet, timestamp: time.Now().Unix()}
sub.client.inflightMu.Unlock()
err := sub.client.WriterPacket(packet)
if err != nil {
log.Error("process message for psub error, ", zap.Error(err))
}
sub.client.ensureRetryTimer()
default:
log.Error("publish with unknown qos", zap.String("ClientID", sub.client.info.clientID))
return
}
}
// timer for retry delivery
func (c *client) ensureRetryTimer(interval ...int64) {
c.retryTimerLock.Lock()
defer c.retryTimerLock.Unlock()
if c.retryTimer != nil {
return
}
if len(interval) > 1 {
return
}
timerInterval := retryInterval
if len(interval) == 1 {
timerInterval = interval[0]
}
c.retryTimer = time.AfterFunc(time.Duration(timerInterval)*time.Second, c.retryDelivery)
}
func (c *client) resetRetryTimer() {
// lock mutex before reading retryTimer
c.retryTimerLock.Lock()
defer c.retryTimerLock.Unlock()
if c.retryTimer == nil {
return
}
// reset timer
c.retryTimer = nil
}
func (c *client) retryDelivery() {
c.resetRetryTimer()
c.inflightMu.RLock()
ilen := len(c.inflight)
c.mu.Lock()
offline := c.conn == nil
c.mu.Unlock()
if offline || ilen == 0 { // stop retrying when the client is offline or inflight is empty
c.inflightMu.RUnlock()
return
}
// copy the to-be-retried elements out of the map so the lock is held only briefly,
// then iterate over the copied slice below
toRetryEle := make([]*inflightElem, 0, ilen)
for _, infEle := range c.inflight {
toRetryEle = append(toRetryEle, infEle)
}
c.inflightMu.RUnlock()
now := time.Now().Unix()
for _, infEle := range toRetryEle {
age := now - infEle.timestamp
if age >= retryInterval {
if infEle.status == Publish {
c.WriterPacket(infEle.packet)
infEle.timestamp = now
} else if infEle.status == Pubrel {
pubrel := packets.NewControlPacket(packets.Pubrel).(*packets.PubrelPacket)
pubrel.MessageID = infEle.packet.MessageID
c.WriterPacket(pubrel)
infEle.timestamp = now
}
} else {
if age < 0 {
age = 0
}
c.ensureRetryTimer(retryInterval - age)
}
}
c.ensureRetryTimer()
}
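For elements not yet due, `retryDelivery` re-arms the timer with the remaining interval (`retryInterval - age`), clamping negative ages to zero. The remaining-time arithmetic in isolation — the helper name and the 20-second value are assumptions; hmq defines its own `retryInterval` constant:

```go
package main

import "fmt"

// Assumed value; hmq defines retryInterval in the broker package.
const retryInterval int64 = 20 // seconds

// nextRetryDelay returns how long to wait before retrying an inflight
// element first sent at sentAt, clamping negative ages to zero the way
// retryDelivery does.
func nextRetryDelay(now, sentAt int64) int64 {
	age := now - sentAt
	if age < 0 {
		age = 0
	}
	if age >= retryInterval {
		return 0 // due now: resend immediately
	}
	return retryInterval - age
}

func main() {
	fmt.Println(nextRetryDelay(100, 95), nextRetryDelay(100, 70))
}
```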

View File

@@ -1,11 +1,8 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package broker
import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"flag"
"fmt"
@@ -13,24 +10,41 @@ import (
"os"
"github.com/fhmq/hmq/logger"
"github.com/fhmq/hmq/plugins/auth"
"github.com/fhmq/hmq/plugins/bridge"
jsoniter "github.com/json-iterator/go"
"go.uber.org/zap"
)
var json = jsoniter.ConfigCompatibleWithStandardLibrary
type Config struct {
Worker int `json:"workerNum"`
Host string `json:"host"`
Port string `json:"port"`
Cluster RouteInfo `json:"cluster"`
Router string `json:"router"`
TlsHost string `json:"tlsHost"`
TlsPort string `json:"tlsPort"`
WsPath string `json:"wsPath"`
WsPort string `json:"wsPort"`
WsTLS bool `json:"wsTLS"`
TlsInfo TLSInfo `json:"tlsInfo"`
Acl bool `json:"acl"`
AclConf string `json:"aclConf"`
Debug bool `json:"-"`
Worker int `json:"workerNum"`
HTTPPort string `json:"httpPort"`
Host string `json:"host"`
Port string `json:"port"`
Cluster RouteInfo `json:"cluster"`
Router string `json:"router"`
TlsHost string `json:"tlsHost"`
TlsPort string `json:"tlsPort"`
WsPath string `json:"wsPath"`
WsPort string `json:"wsPort"`
WsTLS bool `json:"wsTLS"`
TlsInfo TLSInfo `json:"tlsInfo"`
Debug bool `json:"debug"`
Plugin Plugins `json:"plugins"`
UnixFilePath string `json:"unixFilePath"`
WindowsPipeName string `json:"windowsPipeName"`
}
type Plugins struct {
Auth auth.Auth
Bridge bridge.BridgeMQ
}
type NamedPlugins struct {
Auth string
Bridge string
}
type RouteInfo struct {
@@ -49,11 +63,10 @@ var DefaultConfig *Config = &Config{
Worker: 4096,
Host: "0.0.0.0",
Port: "1883",
Acl: false,
}
var (
log *zap.Logger
log = logger.Prod().Named("broker")
)
func showHelp() {
@@ -74,8 +87,11 @@ func ConfigureConfig(args []string) (*Config, error) {
fs.BoolVar(&help, "help", false, "Show this message.")
fs.IntVar(&config.Worker, "w", 1024, "worker num to process message, prefer (client num)/10.")
fs.IntVar(&config.Worker, "worker", 1024, "worker num to process message, prefer (client num)/10.")
fs.StringVar(&config.Port, "port", "1883", "Port to listen on.")
fs.StringVar(&config.Port, "p", "1883", "Port to listen on.")
fs.StringVar(&config.HTTPPort, "httpport", "8080", "HTTP port to listen on.")
fs.StringVar(&config.HTTPPort, "hp", "8080", "HTTP port to listen on.")
fs.StringVar(&config.Port, "port", "", "Port to listen on.")
fs.StringVar(&config.Port, "p", "", "Port to listen on.")
fs.StringVar(&config.UnixFilePath, "unixfilepath", "", "unix sock to listen on.")
fs.StringVar(&config.Host, "host", "0.0.0.0", "Network host to listen on")
fs.StringVar(&config.Cluster.Port, "cp", "", "Cluster port from which members can connect.")
fs.StringVar(&config.Cluster.Port, "clusterport", "", "Cluster port from which members can connect.")
@@ -108,9 +124,6 @@ func ConfigureConfig(args []string) (*Config, error) {
}
})
logger.InitLogger(config.Debug)
log = logger.Get().Named("Broker")
if configFile != "" {
tmpConfig, e := LoadConfig(configFile)
if e != nil {
@@ -120,6 +133,10 @@ func ConfigureConfig(args []string) (*Config, error) {
}
}
if config.Debug {
log = logger.Debug().Named("broker")
}
if err := config.check(); err != nil {
return nil, err
}
@@ -132,7 +149,7 @@ func LoadConfig(filename string) (*Config, error) {
content, err := ioutil.ReadFile(filename)
if err != nil {
log.Error("Read config file error: ", zap.Error(err))
// log.Error("Read config file error: ", zap.Error(err))
return nil, err
}
// log.Info(string(content))
@@ -140,13 +157,24 @@ func LoadConfig(filename string) (*Config, error) {
var config Config
err = json.Unmarshal(content, &config)
if err != nil {
log.Error("Unmarshal config file error: ", zap.Error(err))
// log.Error("Unmarshal config file error: ", zap.Error(err))
return nil, err
}
return &config, nil
}
func (p *Plugins) UnmarshalJSON(b []byte) error {
var named NamedPlugins
err := json.Unmarshal(b, &named)
if err != nil {
return err
}
p.Auth = auth.NewAuth(named.Auth)
p.Bridge = bridge.NewBridgeMQ(named.Bridge)
return nil
}
func (config *Config) check() error {
if config.Worker == 0 {
@@ -211,7 +239,7 @@ func NewTLSConfig(tlsInfo TLSInfo) (*tls.Config, error) {
return nil, err
}
pool := x509.NewCertPool()
ok := pool.AppendCertsFromPEM([]byte(rootPEM))
ok := pool.AppendCertsFromPEM(rootPEM)
if !ok {
return nil, fmt.Errorf("failed to parse root ca certificate")
}

65
broker/http.go Normal file
View File

@@ -0,0 +1,65 @@
package broker
import (
"github.com/gin-gonic/gin"
)
const (
CONNECTIONS = "/api/v1/connections"
)
type ConnClient struct {
Info `json:"info"`
LastMsgTime int64 `json:"lastMsg"`
}
type resp struct {
Code int `json:"code,omitempty"`
Clients []ConnClient `json:"clients,omitempty"`
}
func InitHTTPMoniter(b *Broker) {
gin.SetMode(gin.ReleaseMode)
router := gin.Default()
router.DELETE(CONNECTIONS+"/:clientid", func(c *gin.Context) {
clientid := c.Param("clientid")
cli, ok := b.clients.Load(clientid)
if ok {
conn, success := cli.(*client)
if success {
conn.Close()
}
}
r := resp{Code: 0}
c.JSON(200, &r)
})
router.GET(CONNECTIONS, func(c *gin.Context) {
conns := make([]ConnClient, 0)
b.clients.Range(func(k, v interface{}) bool {
cl, _ := v.(*client)
var pubPack = PubPacket{}
if cl.info.willMsg != nil {
pubPack.TopicName = cl.info.willMsg.TopicName
pubPack.Payload = cl.info.willMsg.Payload
}
msg := ConnClient{
Info: Info{
ClientID: cl.info.clientID,
Username: cl.info.username,
Password: cl.info.password,
Keepalive: cl.info.keepalive,
WillMsg: pubPack,
},
LastMsgTime: cl.lastMsgTime,
}
conns = append(conns, msg)
return true
})
r := resp{Clients: conns}
c.JSON(200, &r)
})
router.Run(":" + b.config.HTTPPort)
}

View File

@@ -1,5 +1,3 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package broker
import (
@@ -17,7 +15,7 @@ func (c *client) SendInfo() {
}
url := c.info.localIP + ":" + c.broker.config.Cluster.Port
infoMsg := NewInfo(c.broker.id, url, false)
infoMsg := NewInfo(c.broker.id, url)
err := c.WriterPacket(infoMsg)
if err != nil {
log.Error("send info message error, ", zap.Error(err))
@@ -48,6 +46,8 @@ func (c *client) SendConnect() {
return
}
m := packets.NewControlPacket(packets.Connect).(*packets.ConnectPacket)
m.ProtocolName = "MQIsdp"
m.ProtocolVersion = 3
m.CleanSession = true
m.ClientIdentifier = c.info.clientID
@@ -60,13 +60,12 @@ func (c *client) SendConnect() {
log.Info("send connect success")
}
func NewInfo(sid, url string, isforword bool) *packets.PublishPacket {
func NewInfo(sid, url string) *packets.PublishPacket {
pub := packets.NewControlPacket(packets.Publish).(*packets.PublishPacket)
pub.Qos = 0
pub.TopicName = BrokerInfoTopic
pub.Retain = false
info := fmt.Sprintf(`{"brokerID":"%s","brokerUrl":"%s"}`, sid, url)
// log.Info("new info", string(info))
pub.Payload = []byte(info)
return pub
}

View File

@@ -55,7 +55,7 @@ func (this *Session) Init(msg *packets.ConnectPacket) error {
this.topics = make(map[string]byte, 1)
this.id = string(msg.ClientIdentifier)
this.id = msg.ClientIdentifier
this.initted = true

View File

@@ -78,7 +78,7 @@ func (this *memTopics) Unsubscribe(topic []byte, sub interface{}) error {
return this.sroot.sremove(topic, sub)
}
// Returned values will be invalidated by the next Subscribers call
// Subscribers Returned values will be invalidated by the next Subscribers call
func (this *memTopics) Subscribers(topic []byte, qos byte, subs *[]interface{}, qoss *[]byte) error {
if !ValidQos(qos) {
return fmt.Errorf("Invalid QoS %d", qos)
@@ -104,7 +104,7 @@ func (this *memTopics) Retain(msg *packets.PublishPacket) error {
return this.rroot.rremove([]byte(msg.TopicName))
}
return this.rroot.rinsert([]byte(msg.TopicName), msg)
return this.rroot.rinsertOrUpdate([]byte(msg.TopicName), msg)
}
func (this *memTopics) Retained(topic []byte, msgs *[]*packets.PublishPacket) error {
@@ -244,6 +244,9 @@ func (this *snode) smatch(topic []byte, qos byte, subs *[]interface{}, qoss *[]b
// let's find the subscribers that match the qos and append them to the list.
if len(topic) == 0 {
this.matchQos(qos, subs, qoss)
if mwcn := this.snodes[MWC]; mwcn != nil {
mwcn.matchQos(qos, subs, qoss)
}
return nil
}
@@ -283,13 +286,11 @@ func newRNode() *rnode {
}
}
func (this *rnode) rinsert(topic []byte, msg *packets.PublishPacket) error {
func (this *rnode) rinsertOrUpdate(topic []byte, msg *packets.PublishPacket) error {
// If there's no more topic levels, that means we are at the matching rnode.
if len(topic) == 0 {
// Reuse the message if possible
if this.msg == nil {
this.msg = msg
}
this.msg = msg
return nil
}
@@ -312,7 +313,7 @@ func (this *rnode) rinsert(topic []byte, msg *packets.PublishPacket) error {
this.rnodes[level] = n
}
return n.rinsert(rem, msg)
return n.rinsertOrUpdate(rem, msg)
}
// Remove the retained message for the supplied topic

View File

@@ -0,0 +1,11 @@
package broker
import (
"fmt"
)
// StartPipeSocketListening is a no-op on macOS: the npipe library used
// for named pipe communication is Windows-only.
func (b *Broker) StartPipeSocketListening(pipeName string, usePipe bool) {
fmt.Println("macos system")
}

View File

@@ -0,0 +1,6 @@
package broker
// StartPipeSocketListening is a no-op on Linux: the npipe library used
// for named pipe communication is Windows-only.
func (b *Broker) StartPipeSocketListening(pipeName string, usePipe bool) {
}

View File

@@ -0,0 +1,61 @@
package broker
import (
"fmt"
"github.com/natefinch/npipe"
"go.uber.org/zap"
"net"
"time"
)
// StartPipeSocketListening uses the open source npipe library to support
// named pipe communication on Windows.
func (b *Broker) StartPipeSocketListening(pipeName string, usePipe bool) {
if !usePipe {
return
}
var err error
var ln *npipe.PipeListener
for {
ln, err = npipe.Listen(pipeName)
if err == nil {
log.Info("Start Listening client on ", zap.String("pipeName", pipeName))
break // successfully listening
}
log.Error("Error listening on pipe, ", zap.Error(err))
time.Sleep(1 * time.Second)
}
tmpDelay := 10 * ACCEPT_MIN_SLEEP
for {
conn, err := ln.Accept()
if err != nil {
if ne, ok := err.(net.Error); ok && ne.Temporary() {
log.Error(
"Temporary Client Accept Error, sleeping",
zap.Error(ne),
zap.Duration("sleeping", tmpDelay),
)
time.Sleep(tmpDelay)
tmpDelay *= 2
if tmpDelay > ACCEPT_MAX_SLEEP {
tmpDelay = ACCEPT_MAX_SLEEP
}
} else {
log.Error("Accept error", zap.Error(err))
}
continue
}
tmpDelay = ACCEPT_MIN_SLEEP
go func() {
err := b.handleConnection(CLIENT, conn)
fmt.Println("handleConnection,", err)
if err != nil {
conn.Close()
}
}()
}
}

View File

@@ -1,5 +1,3 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package broker
import "github.com/eclipse/paho.mqtt.golang/packets"

View File

@@ -6,7 +6,7 @@
"host": "0.0.0.0",
"port": "1993"
},
"router": "127.0.0.1:9888",
"httpPort": "8080",
"tlsPort": "8883",
"tlsHost": "0.0.0.0",
"wsPort": "1888",
@@ -18,6 +18,8 @@
"certFile": "ssl/server/cert.pem",
"keyFile": "ssl/server/key.pem"
},
"acl": false,
"aclConf": "conf/acl.conf"
"plugins": {
"auth": "mock",
"bridge": "csvlog"
}
}

37
deploy/config.yaml Normal file
View File

@@ -0,0 +1,37 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: mqtt-broker
data:
  hmq.config: |
    {
      "workerNum": 4096,
      "port": "1883",
      "host": "0.0.0.0",
      "plugins": ["authhttp","kafka"]
    }
  kafka.json: |
    {
      "addr": [
        "127.0.0.1:9090"
      ],
      "onConnect": "onConnect",
      "onPublish": "onPublish",
      "onSubscribe": "onSubscribe",
      "onDisconnect": "onDisconnect",
      "onUnsubscribe": "onUnsubscribe",
      "deliverMap": {
        "#": "publish",
        "/upload/+/#": "upload"
      }
    }
  authhttp.json: |
    {
      "auth": "http://127.0.0.1:9090/mqtt/auth",
      "acl": "http://127.0.0.1:9090/mqtt/acl",
      "super": "http://127.0.0.1:9090/mqtt/superuser"
    }

44
deploy/deploy.yaml Normal file
View File

@@ -0,0 +1,44 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
spec:
  selector:
    matchLabels:
      app: mqtt-broker
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
      - name: mqtt-broker
        image: hmq:v0.1.0
        ports:
        - containerPort: 1883
        - containerPort: 8080
        volumeMounts:
        - name: mqtt-broker
          mountPath: /conf
          subPath: hmq.config
        - name: mqtt-broker
          mountPath: /plugins/kafka/kafka.json
          subPath: kafka.json
        - name: mqtt-broker
          mountPath: /plugins/authhttp/http.json
          subPath: http.json
      volumes:
      - name: mqtt-broker
        configMap:
          name: mqtt-broker
          items:
          - key: hmq.config
            path: hmq.config
          - key: authhttp.json
            path: http.json
          - key: kafka.json
            path: kafka.json

13
deploy/svc.yaml Normal file
View File

@@ -0,0 +1,13 @@
kind: Service
apiVersion: v1
metadata:
  name: mqtt-broker
spec:
  selector:
    app: mqtt-broker
  ports:
  - protocol: TCP
    port: 1883
    targetPort: 1883
  type: ClusterIP
  sessionAffinity: ClientIP

65
go.mod Normal file
View File

@@ -0,0 +1,65 @@
module github.com/fhmq/hmq
go 1.21
require (
github.com/Shopify/sarama v1.38.1
github.com/bitly/go-simplejson v0.5.0
github.com/cespare/xxhash/v2 v2.1.2
github.com/eapache/queue v1.1.0
github.com/eclipse/paho.mqtt.golang v1.4.2
github.com/gin-gonic/gin v1.9.1
github.com/google/uuid v1.3.0
github.com/json-iterator/go v1.1.12
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/stretchr/testify v1.8.3
go.uber.org/zap v1.24.0
golang.org/x/net v0.23.0
)
require (
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 // indirect
github.com/bytedance/sonic v1.9.1 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/eapache/go-resiliency v1.3.0 // indirect
github.com/eapache/go-xerial-snappy v0.0.0-20230111030713-bf00bc1b83b6 // indirect
github.com/gabriel-vasile/mimetype v1.4.2 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.14.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-uuid v1.0.3 // indirect
github.com/jcmturner/aescts/v2 v2.0.0 // indirect
github.com/jcmturner/dnsutils/v2 v2.0.0 // indirect
github.com/jcmturner/gofork v1.7.6 // indirect
github.com/jcmturner/gokrb5/v8 v8.4.3 // indirect
github.com/jcmturner/rpc/v2 v2.0.3 // indirect
github.com/klauspost/compress v1.15.14 // indirect
github.com/klauspost/cpuid/v2 v2.2.4 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.2.4 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/natefinch/npipe v0.0.0-20160621034901-c1b8fa8bdcce // indirect
github.com/pelletier/go-toml/v2 v2.0.8 // indirect
github.com/pierrec/lz4/v4 v4.1.17 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
github.com/rogpeppe/go-internal v1.12.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.11 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
golang.org/x/arch v0.3.0 // indirect
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/sys v0.18.0 // indirect
golang.org/x/text v0.14.0 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

183
go.sum Normal file
View File

@@ -0,0 +1,183 @@
github.com/Shopify/sarama v1.38.1 h1:lqqPUPQZ7zPqYlWpTh+LQ9bhYNu2xJL6k1SJN4WVe2A=
github.com/Shopify/sarama v1.38.1/go.mod h1:iwv9a67Ha8VNa+TifujYoWGxWnu2kNVAQdSdZ4X2o5g=
github.com/Shopify/toxiproxy/v2 v2.5.0 h1:i4LPT+qrSlKNtQf5QliVjdP08GyAH8+BUIc9gT0eahc=
github.com/Shopify/toxiproxy/v2 v2.5.0/go.mod h1:yhM2epWtAmel9CB8r2+L+PCmhH6yH2pITaPAo7jxJl0=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/bitly/go-simplejson v0.5.0 h1:6IH+V8/tVMab511d5bn4M7EwGXZf9Hj6i2xSwkNEM+Y=
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/bytedance/sonic v1.5.0/go.mod h1:ED5hyg4y6t3/9Ku1R6dU/4KyJ48DZ4jPhfY1O2AihPM=
github.com/bytedance/sonic v1.9.1 h1:6iJ6NqdoxCDr6mbY8h18oSO+cShGSMRGCEo7F2h0x8s=
github.com/bytedance/sonic v1.9.1/go.mod h1:i736AoUSYt75HyZLoJW9ERYxcy6eaN6h4BZXU064P/U=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/eapache/go-resiliency v1.3.0 h1:RRL0nge+cWGlxXbUzJ7yMcq6w2XBEr19dCN6HECGaT0=
github.com/eapache/go-resiliency v1.3.0/go.mod h1:5yPzW0MIvSe0JDsv0v+DvcjEv2FyD6iZYSs1ZI+iQho=
github.com/eapache/go-xerial-snappy v0.0.0-20230111030713-bf00bc1b83b6 h1:8yY/I9ndfrgrXUbOGObLHKBR4Fl3nZXwM2c7OYTT8hM=
github.com/eapache/go-xerial-snappy v0.0.0-20230111030713-bf00bc1b83b6/go.mod h1:YvSRo5mw33fLEx1+DlK6L2VV43tJt5Eyel9n9XBcR+0=
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/eclipse/paho.mqtt.golang v1.4.2 h1:66wOzfUHSSI1zamx7jR6yMEI5EuHnT1G6rNA5PM12m4=
github.com/eclipse/paho.mqtt.golang v1.4.2/go.mod h1:JGt0RsEwEX+Xa/agj90YJ9d9DH2b7upDZMK9HRbFvCA=
github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw=
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.14.0 h1:vgvQWe3XCz3gIeFDm/HnTIbj6UGmg/+t63MyGU2n5js=
github.com/go-playground/validator/v10 v10.14.0/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/jcmturner/aescts/v2 v2.0.0 h1:9YKLH6ey7H4eDBXW8khjYslgyqG2xZikXP0EQFKrle8=
github.com/jcmturner/aescts/v2 v2.0.0/go.mod h1:AiaICIRyfYg35RUkr8yESTqvSy7csK90qZ5xfvvsoNs=
github.com/jcmturner/dnsutils/v2 v2.0.0 h1:lltnkeZGL0wILNvrNiVCR6Ro5PGU/SeBvVO/8c/iPbo=
github.com/jcmturner/dnsutils/v2 v2.0.0/go.mod h1:b0TnjGOvI/n42bZa+hmXL+kFJZsFT7G4t3HTlQ184QM=
github.com/jcmturner/gofork v1.7.6 h1:QH0l3hzAU1tfT3rZCnW5zXl+orbkNMMRGJfdJjHVETg=
github.com/jcmturner/gofork v1.7.6/go.mod h1:1622LH6i/EZqLloHfE7IeZ0uEJwMSUyQ/nDd82IeqRo=
github.com/jcmturner/goidentity/v6 v6.0.1 h1:VKnZd2oEIMorCTsFBnJWbExfNN7yZr3EhJAxwOkZg6o=
github.com/jcmturner/goidentity/v6 v6.0.1/go.mod h1:X1YW3bgtvwAXju7V3LCIMpY0Gbxyjn/mY9zx4tFonSg=
github.com/jcmturner/gokrb5/v8 v8.4.3 h1:iTonLeSJOn7MVUtyMT+arAn5AKAPrkilzhGw8wE/Tq8=
github.com/jcmturner/gokrb5/v8 v8.4.3/go.mod h1:dqRwJGXznQrzw6cWmyo6kH+E7jksEQG/CyVWsJEsJO0=
github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZY=
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/klauspost/compress v1.15.14 h1:i7WCKDToww0wA+9qrUZ1xOjp218vfFo3nTU6UHp+gOc=
github.com/klauspost/compress v1.15.14/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.4 h1:acbojRNwl3o09bUq+yDCtZFc1aiwaAAxtcn8YkZXnvk=
github.com/klauspost/cpuid/v2 v2.2.4/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/natefinch/npipe v0.0.0-20160621034901-c1b8fa8bdcce h1:TqjP/BTDrwN7zP9xyXVuLsMBXYMt6LLYi55PlrIcq8U=
github.com/natefinch/npipe v0.0.0-20160621034901-c1b8fa8bdcce/go.mod h1:ifHPsLndGGzvgzcaXUvzmt6LxKT4pJ+uzEhtnMt+f7A=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/pelletier/go-toml/v2 v2.0.8 h1:0ctb6s9mE31h0/lhu+J6OPmVeDxJn+kYnJc2jZR9tGQ=
github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNcZljzZR9VXg+4=
github.com/pierrec/lz4/v4 v4.1.17 h1:kV4Ip+/hUBC+8T6+2EgburRtkE9ef4nbY3f4dFhGjMc=
github.com/pierrec/lz4/v4 v4.1.17/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.3 h1:RP3t2pwF7cMEbC1dqtB6poj3niw/9gnV4Cjg5oW5gtY=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.11 h1:wy28qYRKZgnJTxGxvye5/wgWr1EKjmUDGYox5mGlRlI=
go.uber.org/goleak v1.1.11/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=
go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.3.0 h1:02VY4/ZcO/gBOH6PUaoiptASxtXU10jazRCP865E97k=
golang.org/x/arch v0.3.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200425230154-ff2c4b7c35a0/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220725212005-46097bf591d3/go.mod h1:AaygXjzTFtRAg2ttMY5RMuhpJ3cNnI0XpyFJD1iQRSM=
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=


@@ -1,95 +0,0 @@
package sessions
import (
"time"
log "github.com/cihub/seelog"
"github.com/go-redis/redis"
jsoniter "github.com/json-iterator/go"
)
var redisClient *redis.Client
var _ SessionsProvider = (*redisProvider)(nil)
const (
sessionName = "session"
)
type redisProvider struct {
}
func init() {
Register("redis", NewRedisProvider())
}
func InitRedisConn(url string) {
redisClient = redis.NewClient(&redis.Options{
Addr: url, // use the configured address (the parameter was previously ignored in favor of a hardcoded 127.0.0.1:6379)
Password: "", // no password set
DB: 0, // use default DB
})
err := redisClient.Ping().Err()
for err != nil {
log.Error("connect redis error: ", err, " 3s try again...")
time.Sleep(3 * time.Second)
err = redisClient.Ping().Err()
}
}
func NewRedisProvider() *redisProvider {
return &redisProvider{}
}
func (r *redisProvider) New(id string) (*Session, error) {
val, _ := jsoniter.Marshal(&Session{id: id})
err := redisClient.HSet(sessionName, id, val).Err()
if err != nil {
return nil, err
}
result, err := redisClient.HGet(sessionName, id).Bytes()
if err != nil {
return nil, err
}
sess := Session{}
err = jsoniter.Unmarshal(result, &sess)
if err != nil {
return nil, err
}
return &sess, nil
}
func (r *redisProvider) Get(id string) (*Session, error) {
result, err := redisClient.HGet(sessionName, id).Bytes()
if err != nil {
return nil, err
}
sess := Session{}
err = jsoniter.Unmarshal(result, &sess)
if err != nil {
return nil, err
}
return &sess, nil
}
func (r *redisProvider) Del(id string) {
redisClient.HDel(sessionName, id)
}
func (r *redisProvider) Save(id string) error {
return nil
}
func (r *redisProvider) Count() int {
return int(redisClient.HLen(sessionName).Val())
}
func (r *redisProvider) Close() error {
return redisClient.Del(sessionName).Err()
}


@@ -5,17 +5,27 @@ package logger
import (
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
var (
// env can be setup at build time with Go Linker. Value could be prod or whatever else for dev env
instance *zap.Logger
logCfg zap.Config
instance *zap.Logger
logCfg zap.Config
encoderCfg = zap.NewProductionEncoderConfig()
)
func init() {
encoderCfg.TimeKey = "timestamp"
encoderCfg.EncodeTime = zapcore.ISO8601TimeEncoder
}
// NewDevLogger return a logger for dev builds
func NewDevLogger() (*zap.Logger, error) {
logCfg := zap.NewDevelopmentConfig()
logCfg := zap.NewProductionConfig()
logCfg.Level = zap.NewAtomicLevelAt(zap.DebugLevel)
// logCfg.DisableStacktrace = true
logCfg.EncoderConfig = encoderCfg
return logCfg.Build()
}
@@ -24,27 +34,31 @@ func NewProdLogger() (*zap.Logger, error) {
logCfg := zap.NewProductionConfig()
logCfg.DisableStacktrace = true
logCfg.Level = zap.NewAtomicLevelAt(zap.InfoLevel)
logCfg.EncoderConfig = encoderCfg
return logCfg.Build()
}
func InitLogger(debug bool) {
var err error
var log *zap.Logger
if debug {
log, err = NewDevLogger()
} else {
log, err = NewProdLogger()
}
if err != nil {
panic("Unable to create a logger.")
}
defer log.Sync()
func Prod() *zap.Logger {
log.Debug("Logger initialization succeeded")
instance = log.Named("hmq")
}
l, _ := NewProdLogger()
instance = l
return instance
}
func Debug() *zap.Logger {
l, _ := NewDevLogger()
instance = l
return instance
}
func Get() *zap.Logger {
if instance == nil {
l, _ := NewProdLogger()
instance = l
}
// Get return a *zap.Logger instance
func Get() *zap.Logger {
return instance
}


@@ -1,5 +1,6 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
/*
Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package logger
import (

main.go

@@ -1,35 +1,30 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
*/
package main
import (
"log"
"os"
"os/signal"
"runtime"
"github.com/fhmq/hmq/broker"
"github.com/fhmq/hmq/logger"
"go.uber.org/zap"
)
var log = logger.Get()
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
config, err := broker.ConfigureConfig(os.Args[1:])
if err != nil {
log.Fatal("configure broker config error: ", err)
log.Fatal("configure broker config error", zap.Error(err))
}
b, err := broker.NewBroker(config)
if err != nil {
log.Fatal("New Broker error: ", err)
log.Fatal("New Broker error: ", zap.Error(err))
}
b.Start()
s := waitForSignal()
log.Println("signal received, broker closed.", s)
log.Info("signal received, broker closed.", zap.Any("signal", s))
}
func waitForSignal() os.Signal {

plugins/auth/auth.go

@@ -0,0 +1,27 @@
package auth
import (
authfile "github.com/fhmq/hmq/plugins/auth/authfile"
"github.com/fhmq/hmq/plugins/auth/authhttp"
)
const (
AuthHTTP = "authhttp"
AuthFile = "authfile"
)
type Auth interface {
CheckACL(action, clientID, username, ip, topic string) bool
CheckConnect(clientID, username, password string) bool
}
func NewAuth(name string) Auth {
switch name {
case AuthHTTP:
return authhttp.Init()
case AuthFile:
return authfile.Init()
default:
return &mockAuth{}
}
}


@@ -0,0 +1,54 @@
## ACL Configure
```
Attention: the ACL type values have changed from `pub=1, sub=2` to `sub=1, pub=2`
```
#### The ACL rules define:
~~~
Allow | type | value | pubsub | Topics
~~~
#### ACL Config
~~~
## type clientid , username, ipaddr
##sub 1 , pub 2, pubsub 3
## %c is clientid , %u is username
allow ip 127.0.0.1 2 $SYS/#
allow clientid 0001 3 #
allow username admin 3 #
allow username joy 3 /test,hello/world
allow clientid * 1 toCloud/%c
allow username * 1 toCloud/%u
deny clientid * 3 #
~~~
~~~
#allow local sub $SYS topic
allow ip 127.0.0.1 1 $SYS/#
~~~
~~~
#allow the client whose id is 0001 or whose username is admin to pub/sub all topics
allow clientid 0001 3 #
allow username admin 3 #
~~~
~~~
#allow the client with username joy to pub/sub the topics '/test' and 'hello/world'
allow username joy 3 /test,hello/world
~~~
~~~
#allow all clients to pub the topic toCloud/{clientid or username}
allow clientid * 2 toCloud/%c
allow username * 2 toCloud/%u
~~~
~~~
#deny all clients pub/sub on all topics
deny clientid * 3 #
~~~
Clients are matched against the ACL rules one by one:
~~~
--------- --------- ---------
Client -> | Rule1 | --nomatch--> | Rule2 | --nomatch--> | Rule3 | -->
--------- --------- ---------
| | |
match match match
\|/ \|/ \|/
allow | deny allow | deny allow | deny
~~~


@@ -1,4 +1,4 @@
## pub 1 , sub 2, pubsub 3
## sub 1 , pub 2, pubsub 3
## %c is clientid , %u is username
##auth type value pub/sub topic
allow ip 127.0.0.1 2 $SYS/#
@@ -9,4 +9,4 @@ allow clientid * 1 toCloud/%c
allow username * 1 toCloud/%u
allow clientid * 2 toDevice/%c
allow username * 2 toDevice/%u
deny clientid * 3 #
deny clientid * 3 #


@@ -0,0 +1,23 @@
package acl
type aclAuth struct {
config *ACLConfig
}
func Init() *aclAuth {
aclConfig, err := AclConfigLoad("./plugins/auth/authfile/acl.conf")
if err != nil {
panic(err)
}
return &aclAuth{
config: aclConfig,
}
}
func (a *aclAuth) CheckConnect(clientID, username, password string) bool {
return true
}
func (a *aclAuth) CheckACL(action, clientID, username, ip, topic string) bool {
return checkTopicAuth(a.config, action, ip, username, clientID, topic)
}


@@ -0,0 +1,23 @@
// +build test
package acl
import (
"os"
"testing"
"github.com/stretchr/testify/assert"
)
func TestOrigAcls(t *testing.T) {
pwd, _ := os.Getwd()
os.Chdir("../../../")
aclOrig := Init()
os.Chdir(pwd)
// rule: allow ip 127.0.0.1 2 $SYS/#
origAllowed := aclOrig.CheckACL(PUB, "dummyClientID", "dummyUser", "127.0.0.1", "$SYS/something")
assert.True(t, origAllowed)
origAllowed = aclOrig.CheckACL(SUB, "dummyClientID", "dummyUser", "127.0.0.1", "$SYS/something")
assert.False(t, origAllowed)
}


@@ -1,22 +1,21 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>*/
package acl
import "strings"
func CheckTopicAuth(ACLInfo *ACLConfig, typ int, ip, username, clientid, topic string) bool {
func checkTopicAuth(ACLInfo *ACLConfig, action, ip, username, clientid, topic string) bool {
for _, info := range ACLInfo.Info {
ctyp := info.Typ
switch ctyp {
case CLIENTID:
if match, auth := info.checkWithClientID(typ, clientid, topic); match {
if match, auth := info.checkWithClientID(action, clientid, topic); match {
return auth
}
case USERNAME:
if match, auth := info.checkWithUsername(typ, username, topic); match {
if match, auth := info.checkWithUsername(action, username, topic); match {
return auth
}
case IP:
if match, auth := info.checkWithIP(typ, ip, topic); match {
if match, auth := info.checkWithIP(action, ip, topic); match {
return auth
}
}
@@ -24,18 +23,18 @@ func CheckTopicAuth(ACLInfo *ACLConfig, typ int, ip, username, clientid, topic s
return false
}
func (a *AuthInfo) checkWithClientID(typ int, clientid, topic string) (bool, bool) {
func (a *AuthInfo) checkWithClientID(action, clientid, topic string) (bool, bool) {
auth := false
match := false
if a.Val == "*" || a.Val == clientid {
for _, tp := range a.Topics {
des := strings.Replace(tp, "%c", clientid, -1)
if typ == PUB {
if action == PUB {
if pubTopicMatch(topic, des) {
match = true
auth = a.checkAuth(PUB)
}
} else if typ == SUB {
} else if action == SUB {
if subTopicMatch(topic, des) {
match = true
auth = a.checkAuth(SUB)
@@ -46,18 +45,18 @@ func (a *AuthInfo) checkWithClientID(typ int, clientid, topic string) (bool, boo
return match, auth
}
func (a *AuthInfo) checkWithUsername(typ int, username, topic string) (bool, bool) {
func (a *AuthInfo) checkWithUsername(action, username, topic string) (bool, bool) {
auth := false
match := false
if a.Val == "*" || a.Val == username {
for _, tp := range a.Topics {
des := strings.Replace(tp, "%u", username, -1)
if typ == PUB {
if action == PUB {
if pubTopicMatch(topic, des) {
match = true
auth = a.checkAuth(PUB)
}
} else if typ == SUB {
} else if action == SUB {
if subTopicMatch(topic, des) {
match = true
auth = a.checkAuth(SUB)
@@ -68,18 +67,18 @@ func (a *AuthInfo) checkWithUsername(typ int, username, topic string) (bool, boo
return match, auth
}
func (a *AuthInfo) checkWithIP(typ int, ip, topic string) (bool, bool) {
func (a *AuthInfo) checkWithIP(action, ip, topic string) (bool, bool) {
auth := false
match := false
if a.Val == "*" || a.Val == ip {
for _, tp := range a.Topics {
des := tp
if typ == PUB {
if action == PUB {
if pubTopicMatch(topic, des) {
auth = a.checkAuth(PUB)
match = true
}
} else if typ == SUB {
} else if action == SUB {
if subTopicMatch(topic, des) {
auth = a.checkAuth(SUB)
match = true
@@ -90,15 +89,15 @@ func (a *AuthInfo) checkWithIP(typ int, ip, topic string) (bool, bool) {
return match, auth
}
func (a *AuthInfo) checkAuth(typ int) bool {
func (a *AuthInfo) checkAuth(action string) bool {
auth := false
if typ == PUB {
if action == PUB {
if a.Auth == ALLOW && (a.PubSub == PUB || a.PubSub == PUBSUB) {
auth = true
} else if a.Auth == DENY && a.PubSub == SUB {
auth = true
}
} else if typ == SUB {
} else if action == SUB {
if a.Auth == ALLOW && (a.PubSub == SUB || a.PubSub == PUBSUB) {
auth = true
} else if a.Auth == DENY && a.PubSub == PUB {


@@ -1,5 +1,3 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package acl
import (
@@ -7,14 +5,13 @@ import (
"errors"
"io"
"os"
"strconv"
"strings"
)
const (
PUB = 1
SUB = 2
PUBSUB = 3
SUB = "1"
PUB = "2"
PUBSUB = "3"
CLIENTID = "clientid"
USERNAME = "username"
IP = "ip"
@@ -26,7 +23,7 @@ type AuthInfo struct {
Auth string
Typ string
Val string
PubSub int
PubSub string
Topics []string
}
@@ -36,21 +33,18 @@ type ACLConfig struct {
}
func AclConfigLoad(file string) (*ACLConfig, error) {
if file == "" {
file = "./conf/acl.conf"
}
aclconifg := &ACLConfig{
File: file,
Info: make([]*AuthInfo, 0, 4),
}
err := aclconifg.Prase()
err := aclconifg.Parse()
if err != nil {
return nil, err
}
return aclconifg, err
}
func (c *ACLConfig) Prase() error {
func (c *ACLConfig) Parse() error {
f, err := os.Open(c.File)
defer f.Close()
if err != nil {
@@ -81,12 +75,16 @@ func (c *ACLConfig) Prase() error {
parseErr = errors.New("\"" + line + "\" format is error")
break
}
var pubsub int
pubsub, err = strconv.Atoi(tmpArr[3])
if err != nil {
if tmpArr[3] != PUB && tmpArr[3] != SUB && tmpArr[3] != PUBSUB {
parseErr = errors.New("\"" + line + "\" format is error")
break
}
// var pubsub int
// pubsub, err = strconv.Atoi(tmpArr[3])
// if err != nil {
// parseErr = errors.New("\"" + line + "\" format is error")
// break
// }
topicStr := strings.Replace(tmpArr[4], " ", "", -1)
topicStr = strings.Replace(topicStr, "\n", "", -1)
topics := strings.Split(topicStr, ",")
@@ -95,7 +93,7 @@ func (c *ACLConfig) Prase() error {
Typ: tmpArr[1],
Val: tmpArr[2],
Topics: topics,
PubSub: pubsub,
PubSub: tmpArr[3],
}
c.Info = append(c.Info, tmpAuth)
if err != nil {


@@ -1,5 +1,3 @@
/* Copyright (c) 2018, joy.zhou <chowyu08@gmail.com>
*/
package acl
import (


@@ -0,0 +1,179 @@
package authhttp
import (
"encoding/json"
"io"
"io/ioutil"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/fhmq/hmq/logger"
"go.uber.org/zap"
)
//Config holds the HTTP auth, ACL, and superuser endpoint URLs
type Config struct {
AuthURL string `json:"auth"`
ACLURL string `json:"acl"`
SuperURL string `json:"super"`
}
type authHTTP struct {
client *http.Client
}
var (
config Config
log = logger.Get().Named("authhttp")
httpClient *http.Client
)
//Init initializes the HTTP auth plugin from its JSON config
func Init() *authHTTP {
content, err := ioutil.ReadFile("./plugins/auth/authhttp/http.json")
if err != nil {
log.Fatal("Read config file error: ", zap.Error(err))
}
// log.Info(string(content))
err = json.Unmarshal(content, &config)
if err != nil {
log.Fatal("Unmarshal config file error: ", zap.Error(err))
}
// fmt.Println("http: config: ", config)
httpClient = &http.Client{
Transport: &http.Transport{
MaxConnsPerHost: 100,
MaxIdleConns: 100,
MaxIdleConnsPerHost: 100,
},
Timeout: time.Second * 100,
}
return &authHTTP{client: httpClient}
}
// CheckConnect check mqtt connect
func (a *authHTTP) CheckConnect(clientID, username, password string) bool {
action := "connect"
{
aCache := checkCache(action, clientID, username, password, "")
if aCache != nil {
if aCache.password == password && aCache.username == username && aCache.action == action {
return true
}
}
}
data := url.Values{}
data.Add("username", username)
data.Add("clientid", clientID)
data.Add("password", password)
req, err := http.NewRequest("POST", config.AuthURL, strings.NewReader(data.Encode()))
if err != nil {
log.Error("new request super: ", zap.Error(err))
return false
}
req.Header.Add("Content-Type", "application/x-www-form-urlencoded")
req.Header.Add("Content-Length", strconv.Itoa(len(data.Encode())))
resp, err := a.client.Do(req)
if err != nil {
log.Error("request super: ", zap.Error(err))
return false
}
defer resp.Body.Close()
io.Copy(ioutil.Discard, resp.Body)
if resp.StatusCode == http.StatusOK {
addCache(action, clientID, username, password, "")
return true
}
return false
}
// //CheckSuper check mqtt connect
// func CheckSuper(clientID, username, password string) bool {
// action := "connect"
// {
// aCache := checkCache(action, clientID, username, password, "")
// if aCache != nil {
// if aCache.password == password && aCache.username == username && aCache.action == action {
// return true
// }
// }
// }
// data := url.Values{}
// data.Add("username", username)
// data.Add("clientid", clientID)
// data.Add("password", password)
// req, err := http.NewRequest("POST", config.SuperURL, strings.NewReader(data.Encode()))
// if err != nil {
// log.Error("new request super: ", zap.Error(err))
// return false
// }
// req.Header.Add("Content-Type", "application/x-www-form-urlencoded")
// req.Header.Add("Content-Length", strconv.Itoa(len(data.Encode())))
// resp, err := httpClient.Do(req)
// if err != nil {
// log.Error("request super: ", zap.Error(err))
// return false
// }
// defer resp.Body.Close()
// io.Copy(ioutil.Discard, resp.Body)
// if resp.StatusCode == http.StatusOK {
// return true
// }
// return false
// }
//CheckACL checks topic pub/sub authorization via the ACL endpoint
func (a *authHTTP) CheckACL(action, clientID, username, ip, topic string) bool {
{
aCache := checkCache(action, "", username, "", topic)
if aCache != nil {
if aCache.topic == topic && aCache.action == action {
return true
}
}
}
req, err := http.NewRequest("GET", config.ACLURL, nil)
if err != nil {
log.Error("get acl: ", zap.Error(err))
return false
}
data := req.URL.Query()
data.Add("username", username)
data.Add("topic", topic)
data.Add("access", action)
req.URL.RawQuery = data.Encode()
// fmt.Println("req:", req)
resp, err := a.client.Do(req)
if err != nil {
log.Error("request acl: ", zap.Error(err))
return false
}
defer resp.Body.Close()
io.Copy(ioutil.Discard, resp.Body)
if resp.StatusCode == http.StatusOK {
addCache(action, "", username, "", topic)
return true
}
return false
}


@@ -0,0 +1,32 @@
package authhttp
import (
"time"
"github.com/patrickmn/go-cache"
)
type authCache struct {
action string
username string
clientID string
password string
topic string
}
var (
// cache = make(map[string]authCache)
c = cache.New(5*time.Minute, 10*time.Minute)
)
func checkCache(action, clientID, username, password, topic string) *authCache {
authc, found := c.Get(username)
if found {
return authc.(*authCache)
}
return nil
}
func addCache(action, clientID, username, password, topic string) {
c.Set(username, &authCache{action: action, username: username, clientID: clientID, password: password, topic: topic}, cache.DefaultExpiration)
}


@@ -0,0 +1,5 @@
{
"auth": "http://127.0.0.1:9090/mqtt/auth",
"acl": "http://127.0.0.1:9090/mqtt/acl",
"super": "http://127.0.0.1:9090/mqtt/superuser"
}

plugins/auth/mock.go

@@ -0,0 +1,11 @@
package auth
type mockAuth struct{}
func (m *mockAuth) CheckACL(action, clientID, username, ip, topic string) bool {
return true
}
func (m *mockAuth) CheckConnect(clientID, username, password string) bool {
return true
}

plugins/bridge/CSVLog.md

@@ -0,0 +1,50 @@
# CSVLog Plugin For HMQ
This is a bridge implementation for HMQ that allows messages to be logged to a CSV file at runtime.
It can be used for debugging/monitoring purposes, for integration with other systems/platforms, or as an audit trail of messages.
The plugin allows you to define zero or more filters which determine which messages get bridged. Where no filters are defined the plugin bridges every message. Where one or more filters exist, the plugin applies them and only bridges messages that match the filter spec.
The plugin allows you to provide a filename for the output file, and also supports three special filenames: {LOG}, {STDOUT}, and {NULL}. {LOG} results in messages being bridged to the log, {STDOUT} bridges them to standard output, and {NULL} simply skips and returns without an error.
## Configuration
The configuration settings for CSVLog are defined by the struct csvBridgeConfig.
```
type csvBridgeConfig struct {
FileName string `json:"fileName"`
LogFileMaxSizeMB int64 `json:"logFileMaxSizeMB"`
LogFileMaxFiles int64 `json:"logFileMaxFiles"`
WriteIntervalSecs int64 `json:"writeIntervalSecs"`
CommandTopic string `json:"commandTopic"`
Filters []string `json:"filters"`
}
```
| Setting | Description |
| ----------- | ----------- |
| FileName | A complete filename for the output file, or {LOG} to send bridged messages to the log, {STDOUT} to send bridged messages to STDOUT, or {NULL} to not bridge anything at all |
| LogFileMaxSizeMB | The size in megabytes at which the log file is rotated |
| LogFileMaxFiles | The maximum number of rotated logfiles to retain before they're deleted |
| WriteIntervalSecs | The delay before flushing any pending writes to the file |
| CommandTopic | The name of a topic to which commands relating to CSVLog will be sent eg "bridge/CSVLOG/command" |
| Filters | An array of filter specifications used to determine which messages are bridged. If no filters are specified, the filter is assumed to be "#", which bridges everything. Filters are specified the same way as topic ACLs |
## Filters
Filters use the same syntax as ACL permissions.
A filter can name a specific topic:
"animals/cats" will bridge messages sent to the "animals/cats" topic.
A filter can use the + or # wildcards, so:
"animals/cats/+" will bridge messages sent to "animals/cats/breeds", "animals/cats/colours" but not "animals/cats/breeds/longhair"
"animals/cats/#" will bridge messages sent to "animals/cats/breeds", "animals/cats/colours", "animals/cats/breeds/longhair", etc
## Commands
Currently two commands can be sent to the CSVLog bridge:
ROTATEFILE - Triggers an immediate rotation of the log file
RELOADCONFIG - Triggers a reload of the CSVLog config file

plugins/bridge/bridge.go

@@ -0,0 +1,53 @@
package bridge
import "github.com/fhmq/hmq/logger"
const (
//Connect mqtt connect
Connect = "connect"
//Publish mqtt publish
Publish = "publish"
//Subscribe mqtt sub
Subscribe = "subscribe"
//Unsubscribe mqtt unsubscribe
Unsubscribe = "unsubscribe"
//Disconnect mqtt disconnect
Disconnect = "disconnect"
)
var (
log = logger.Get().Named("bridge")
)
//Elements describes a bridged message
type Elements struct {
ClientID string `json:"clientid"`
Username string `json:"username"`
Topic string `json:"topic"`
Payload string `json:"payload"`
Timestamp int64 `json:"ts"`
Size int32 `json:"size"`
Action string `json:"action"`
}
const (
//Kafka plugin name
Kafka = "kafka"
CSVLog = "csvlog"
)
type BridgeMQ interface {
// Publish return true to cost the message
Publish(e *Elements) (bool, error)
}
func NewBridgeMQ(name string) BridgeMQ {
switch name {
case Kafka:
return InitKafka()
case CSVLog:
return InitCSVLog()
default:
return &mockMQ{}
}
}

plugins/bridge/csvlog.go

@@ -0,0 +1,414 @@
package bridge
/*
Copyright (c) 2021, Gary Barnett @thinkovation. Released under the Apache 2 License
CSVLog is a bridge plugin for HMQ that implements CSV logging of messages. See CSVLog.md for more information
*/
import (
"encoding/csv"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"sort"
"strings"
"sync"
"time"
"go.uber.org/zap"
)
type csvBridgeConfig struct {
FileName string `json:"fileName"`
LogFileMaxSizeMB int64 `json:"logFileMaxSizeMB"`
LogFileMaxFiles int64 `json:"logFileMaxFiles"`
WriteIntervalSecs int64 `json:"writeIntervalSecs"`
CommandTopic string `json:"commandTopic"`
Filters []string `json:"filters"`
}
type csvLog struct {
config csvBridgeConfig
buffer []string
msgchan chan (*Elements)
sync.RWMutex
}
// rotateLog performs a log rotation - copying the current logfile to the base file name plus a timestamp
func (c *csvLog) rotateLog(withPrune bool) error {
c.Lock()
filename := c.config.FileName
c.Unlock()
basename := strings.TrimSuffix(filename, filepath.Ext(filename))
newpath := basename + time.Now().Format("-20060102T150405") + filepath.Ext(filename)
renameError := os.Rename(filename, newpath)
if renameError != nil {
return renameError
}
outfile, createErr := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if createErr != nil {
return createErr
}
outfile.Close()
// Whenever we rotate a logfile we prune
if withPrune {
c.logFilePrune()
}
return nil
}
// writeToLog takes an array of elements and writes them to the logfile (or to the log or stdout) specified in
// the configuration
func (c *csvLog) writeToLog(els []Elements) error {
c.RLock()
fname := c.config.FileName
c.RUnlock()
if fname == "" {
fname = "CSVLOG.CSV"
}
if fname == "{LOG}" {
for _, value := range els {
t := time.Unix(value.Timestamp, 0)
log.Info(t.Format("2006-01-02T15:04:05") + " " + value.ClientID + " " + value.Username + " " + value.Action + " " + value.Topic + " " + value.Payload)
}
return nil
}
if fname == "{STDOUT}" {
for _, value := range els {
t := time.Unix(value.Timestamp, 0)
fmt.Println(t.Format("2006-01-02T15:04:05") + " " + value.ClientID + " " + value.Username + " " + value.Action + " " + value.Topic + " " + value.Payload)
}
return nil
}
var mbsize int64
fileStat, fileStatErr := os.Stat(fname)
if fileStatErr != nil {
log.Warn("Could not get CSVLog info. Received Err " + fileStatErr.Error())
mbsize = 0
} else {
mbsize = fileStat.Size() / 1024 / 1024
}
if mbsize > c.config.LogFileMaxSizeMB && c.config.LogFileMaxSizeMB != 0 {
rotateErr := c.rotateLog(true)
if rotateErr != nil {
log.Warn("Unable to rotate outputfile")
}
}
outfile, outfileOpenError := os.OpenFile(fname, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if outfileOpenError != nil {
log.Warn("Could not open CSV Log file to write")
return errors.New("could not write to CSV Log file")
}
defer outfile.Close()
writer := csv.NewWriter(outfile)
defer writer.Flush()
for _, value := range els {
t := time.Unix(value.Timestamp, 0)
var outrow = []string{t.Format("2006-01-02T15:04:05"), value.ClientID, value.Username, value.Action, value.Topic, value.Payload}
writeOutRowError := writer.Write(outrow)
if writeOutRowError != nil {
log.Warn("Could not write msg to CSV Log")
}
}
return nil
}
// Worker should be invoked as a goroutine - It listens on the csvlog message channel for incoming messages
// for performance we batch messages into an outqueue and write them in bulk when a timer expires
func (c *csvLog) Worker() {
log.Info("Running CSVLog worker")
var outqueue []Elements
for {
c.RLock()
waitInterval := c.config.WriteIntervalSecs
c.RUnlock()
timer := time.NewTimer(time.Second * time.Duration(waitInterval))
select {
case p := <-c.msgchan:
c.RLock()
oktopublish := false
// Check to see if any filters are defined. If there are none we assume we're logging everything
if len(c.config.Filters) != 0 {
// We pick up a Read lock here to parse the c.config.Filters string array
// as it's a read lock, and write locks will be rare
// it feels as if this will be fine.
// If there is contention, it _might_ make sense to quickly lock c, get
// the filters and release the lock, then process the filters with no locks
// but I think it's unlikely
for _, filt := range c.config.Filters {
if topicMatch(p.Topic, filt) {
oktopublish = true
break
}
}
} else {
oktopublish = true
}
if oktopublish {
var el Elements
el.Action = p.Action
el.ClientID = p.ClientID
el.Payload = p.Payload
el.Size = p.Size
el.Timestamp = p.Timestamp
el.Topic = p.Topic
el.Username = p.Username
outqueue = append(outqueue, el)
}
c.RUnlock()
case <-timer.C:
if len(outqueue) > 0 {
writeResult := c.writeToLog(outqueue)
if writeResult != nil {
log.Warn("Trouble writing to CSV Log")
}
outqueue = nil
}
}
}
}
// LoadCSVLogConfig loads the configuration file - it currently looks in
// "./plugins/csvlog/csvlogconfig.json" (following the example of the default location of the kafka plugin config file)
// if it doesn't find it there it looks in two further places - the current directory and
// an "assets" folder under the current directory (this is for compatibility with a couple of deployed
// implementations).
func LoadCSVLogConfig() csvBridgeConfig {
// Check to see if the CSVLOGCONFFILE environment variable is set and if so
// check that it does actually point to a file
csvLogConfigFile := os.Getenv("CSVLOGCONFFILE")
if csvLogConfigFile != "" {
if _, err := os.Stat(csvLogConfigFile); os.IsNotExist(err) {
csvLogConfigFile = ""
}
}
// If csvLogConfigFile is blank, look in the plugins directory,
// then the current directory. If it's still not found we fall back to a default config.
if csvLogConfigFile == "" {
csvLogConfigFile = "./plugins/csvlog/csvlogconfig.json"
}
if _, err := os.Stat(csvLogConfigFile); os.IsNotExist(err) {
if _, err := os.Stat("csvlogconfig.json"); os.IsNotExist(err) {
csvLogConfigFile = ""
} else {
csvLogConfigFile = "csvlogconfig.json"
}
}
var configUnmarshalErr error
var config csvBridgeConfig
if csvLogConfigFile != "" {
log.Info("Trying to load config file from " + csvLogConfigFile)
content, err := ioutil.ReadFile(csvLogConfigFile)
if err != nil {
log.Warn("Read config file error: ", zap.Error(err))
}
configUnmarshalErr = json.Unmarshal(content, &config)
}
if configUnmarshalErr != nil || config.FileName == "" {
log.Warn("Unable to load csvlog config file, so using default settings")
config.FileName = "/var/log/csvlog.log"
config.CommandTopic = "CSVLOG/command"
config.WriteIntervalSecs = 10
config.LogFileMaxSizeMB = 1
config.LogFileMaxFiles = 4
}
return config
}
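A csvlogconfig.json matching the csvBridgeConfig struct above might look like this; the values are illustrative (they echo the built-in defaults), and the filter patterns are examples, not repo defaults:

```json
{
	"fileName": "/var/log/csvlog.log",
	"logFileMaxSizeMB": 1,
	"logFileMaxFiles": 4,
	"writeIntervalSecs": 10,
	"commandTopic": "CSVLOG/command",
	"filters": ["sensors/#", "devices/+/status"]
}
```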
// InitCSVLog initialises a CSVLOG plugin
// It does this by loading a config file if one can be found. The default filename follows the same
// convention as the kafka plugin - ie it's in "./plugins/csvlog/csvlogconfig.json" but an
// environment var - CSVLOGCONFFILE - can be set to provide a different location.
//
// Once the config is set the worker is started
func InitCSVLog() *csvLog {
log.Info("Trying to init CSVLOG")
c := &csvLog{config: LoadCSVLogConfig()}
c.msgchan = make(chan *Elements, 200)
//Start the csvlog worker
go c.Worker()
return c
}
// topicMatch accepts a topic name and a filter string, it then evaluates the
// topic against the filter string and returns true if there is a match.
//
// The CSV bridge can be configured with 0, 1 or more filters - Where there are no
// filters specified, every message will be re-published. Where there are filters, any message
// that passes any of the filter tests will be re-published.
func topicMatch(topic string, filter string) bool {
if topic == filter || filter == "#" {
return true
}
topicComponents := strings.Split(topic, "/")
filterComponents := strings.Split(filter, "/")
currentpos := 0
filterComponentsLength := len(filterComponents)
currentFilterComponent := ""
if filterComponentsLength > 0 {
currentFilterComponent = filterComponents[currentpos]
}
for _, topicVal := range topicComponents {
if currentFilterComponent == "" {
return false
}
if currentFilterComponent == "#" {
return true
}
if currentFilterComponent != "+" && currentFilterComponent != topicVal {
return false
}
currentpos++
if filterComponentsLength > currentpos {
currentFilterComponent = filterComponents[currentpos]
} else {
currentFilterComponent = ""
}
}
return true
}
// logFilePrune checks the number of rotated logfiles and prunes them
func (c *csvLog) logFilePrune() error {
// List the rotated files
c.RLock()
filename := c.config.FileName
maxfiles := c.config.LogFileMaxFiles
c.RUnlock()
if maxfiles == 0 {
return nil
}
fileExt := filepath.Ext(filename)
fileDir := filepath.Dir(filename)
baseFileName := strings.TrimSuffix(filepath.Base(filename), fileExt)
files, err := ioutil.ReadDir(fileDir)
if err != nil {
return err
}
var foundFiles []string
for _, file := range files {
if strings.HasPrefix(file.Name(), baseFileName+"-") {
foundFiles = append(foundFiles, file.Name())
}
}
if len(foundFiles) >= int(maxfiles) {
log.Info(fmt.Sprintf("Found %d rotated logfiles", len(foundFiles)))
sort.Strings(foundFiles)
for i := 0; i < len(foundFiles)-int(maxfiles); i++ {
prunePath := filepath.Join(fileDir, foundFiles[i])
fileDeleteError := os.Remove(prunePath)
log.Info("Pruning logfile " + prunePath)
if fileDeleteError != nil {
log.Warn("Could not delete file " + prunePath)
}
}
}
return nil
}
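Because rotateLog's timestamp suffix sorts lexicographically in chronological order, sort.Strings leaves the oldest rotated files at the front, which is what lets the prune loop delete oldest-first. A self-contained sketch (the `pruneList` helper is ours, extracted for illustration):

```go
package main

import (
	"fmt"
	"sort"
)

// pruneList returns the rotated file names that logFilePrune would delete:
// everything except the newest maxfiles entries, oldest first. The timestamp
// suffix sorts lexicographically in chronological order.
func pruneList(names []string, maxfiles int) []string {
	sorted := append([]string(nil), names...)
	sort.Strings(sorted)
	if len(sorted) < maxfiles {
		return nil
	}
	return sorted[:len(sorted)-maxfiles]
}

func main() {
	rotated := []string{
		"csvlog-20240419T170413.log",
		"csvlog-20240108T180540.log",
		"csvlog-20240327T173413.log",
	}
	// With LogFileMaxFiles = 2, only the oldest file is pruned.
	fmt.Println(pruneList(rotated, 2)) // [csvlog-20240108T180540.log]
}
```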
// Publish implements the bridge interface - it accepts an Element then checks to see if that element is a
// message published to the admin topic for the plugin
//
func (c *csvLog) Publish(e *Elements) (bool, error) {
// A short-lived lock on c allows us to
// get the Command topic then release the lock
// This then allows us to process the command - which may
// take a write lock on c (to update values) and then
// return here where we'll pick up a
// read lock to iterate over the c.config.filters
// We're trying to minimise the time spent in this function
// and to limit the overall time spent in any write locks.
c.RLock()
//CSVLOG allows you to configure a CommandTopic - a topic to which commands affecting the behaviour of CSVLog can be sent
//The simplest would be a message with a payload of "RELOADCONFIG" which will reload the configuration, allowing configuration
//changes to be made at runtime without restarting the broker
CommandTopic := c.config.CommandTopic
OutFile := c.config.FileName
c.RUnlock()
// If the outfile is set to "{NULL}" we don't do anything with the message - we just return nil
// This feature is here to allow CSVLOG to be enabled/disabled at runtime
if OutFile == "{NULL}" {
return false, nil
}
if e.Topic == CommandTopic {
log.Info("CSVLOG Command Received")
// Process Command
// These are going to be rare occurrences, so in this implementation
// we will process the command here - but if we _really_ want to
// squeeze delays out, we could have a worker sitting on a
// command channel processing any commands.
if e.Payload == "RELOADCONFIG" {
newConfig := LoadCSVLogConfig()
c.Lock()
c.config = newConfig
c.Unlock()
}
if e.Payload == "ROTATEFILE" {
c.rotateLog(true)
}
if e.Payload == "ROTATEFILENOPRUNE" {
c.rotateLog(false)
}
// We could return without doing anything more here, but
// for now we move ahead with the filter processing on the
// basis that unless we either filter for "all" (with #) or
// filter for the CommandTopic, they won't be logged - but we
// may have a reason for wanting to track commands too
}
// Push the message into the channel and return
// the channel is buffered and is read by a goroutine so this should block for the shortest possible time
c.msgchan <- e
return false, nil
}


@@ -0,0 +1,36 @@
package bridge
import (
"fmt"
"testing"
)
//Test_topicMatch is here to double check the topic matching logic
func Test_topicMatch(t *testing.T) {
tests := []struct {
name string
topic string
filter string
want bool
}{
// Some sample test cases
{name: "Simple", topic: "test", filter: "test", want: true},
{name: "Simple", topic: "test/cat", filter: "test/+", want: true},
{name: "Simple", topic: "test/cat/breed", filter: "test/+", want: false},
{name: "Simple", topic: "test/cat", filter: "test/#", want: true},
{name: "Simple", topic: "test/cat/banana", filter: "test/#", want: true},
{name: "Simple", topic: "test/cat/banana", filter: "test/+", want: false},
{name: "Simple", topic: "test/dog/banana", filter: "test/cat/+", want: false},
{name: "Simple", topic: "test/cat/banana", filter: "test/+/banana", want: true},
}
for _, tt := range tests {
fmt.Println(tt)
t.Run(tt.name, func(t *testing.T) {
if got := topicMatch(tt.topic, tt.filter); got != tt.want {
t.Errorf("topicMatch() = %v, want %v", got, tt.want)
}
})
}
}

plugins/bridge/kafka.go Normal file

@@ -0,0 +1,156 @@
package bridge
import (
"encoding/json"
"errors"
"io/ioutil"
"strings"
"time"
"github.com/Shopify/sarama"
"go.uber.org/zap"
)
type kafkaConfig struct {
Addr []string `json:"addr"`
ConnectTopic string `json:"onConnect"`
SubscribeTopic string `json:"onSubscribe"`
PublishTopic string `json:"onPublish"`
UnsubscribeTopic string `json:"onUnsubscribe"`
DisconnectTopic string `json:"onDisconnect"`
DeliverMap map[string]string `json:"deliverMap"`
}
type kafka struct {
kafkaConfig kafkaConfig
kafkaClient sarama.AsyncProducer
}
// InitKafka Init kafka client
func InitKafka() *kafka {
log.Info("connecting to kafka...")
content, err := ioutil.ReadFile("./plugins/kafka/kafka.json")
if err != nil {
log.Fatal("Read config file error: ", zap.Error(err))
}
var config kafkaConfig
err = json.Unmarshal(content, &config)
if err != nil {
log.Fatal("Unmarshal config file error: ", zap.Error(err))
}
c := &kafka{kafkaConfig: config}
c.connect()
return c
}
//connect
func (k *kafka) connect() {
conf := sarama.NewConfig()
conf.Version = sarama.V1_1_1_0
kafkaClient, err := sarama.NewAsyncProducer(k.kafkaConfig.Addr, conf)
if err != nil {
log.Fatal("create kafka async producer failed: ", zap.Error(err))
}
go func() {
for err := range kafkaClient.Errors() {
log.Error("send msg to kafka failed: ", zap.Error(err))
}
}()
k.kafkaClient = kafkaClient
}
//Publish publish to kafka
func (k *kafka) Publish(e *Elements) (bool, error) {
config := k.kafkaConfig
key := e.ClientID
topics := make(map[string]bool)
switch e.Action {
case Connect:
if config.ConnectTopic != "" {
topics[config.ConnectTopic] = true
}
case Publish:
if config.PublishTopic != "" {
topics[config.PublishTopic] = true
}
// check each deliverMap wildcard pattern against the message topic
for reg, topic := range config.DeliverMap {
match := matchTopic(reg, e.Topic)
if match {
topics[topic] = true
}
}
case Subscribe:
if config.SubscribeTopic != "" {
topics[config.SubscribeTopic] = true
}
case Unsubscribe:
if config.UnsubscribeTopic != "" {
topics[config.UnsubscribeTopic] = true
}
case Disconnect:
if config.DisconnectTopic != "" {
topics[config.DisconnectTopic] = true
}
default:
return false, errors.New("error action: " + e.Action)
}
return false, k.publish(topics, key, e)
}
func (k *kafka) publish(topics map[string]bool, key string, msg *Elements) error {
payload, err := json.Marshal(msg)
if err != nil {
return err
}
for topic := range topics {
select {
case k.kafkaClient.Input() <- &sarama.ProducerMessage{
Topic: topic,
Key: sarama.ByteEncoder(key),
Value: sarama.ByteEncoder(payload),
}:
continue
case <-time.After(5 * time.Second):
return errors.New("write kafka timeout")
}
}
return nil
}
func match(subTopic []string, topic []string) bool {
if len(subTopic) == 0 {
if len(topic) == 0 {
return true
}
return false
}
if len(topic) == 0 {
if subTopic[0] == "#" {
return true
}
return false
}
if subTopic[0] == "#" {
return true
}
if (subTopic[0] == "+") || (subTopic[0] == topic[0]) {
return match(subTopic[1:], topic[1:])
}
return false
}
func matchTopic(subTopic string, topic string) bool {
return match(strings.Split(subTopic, "/"), strings.Split(topic, "/"))
}


@@ -0,0 +1,14 @@
{
"addr": [
"127.0.0.1:9090"
],
"onConnect": "onConnect",
"onPublish": "onPublish",
"onSubscribe": "onSubscribe",
"onDisconnect": "onDisconnect",
"onUnsubscribe": "onUnsubscribe",
"deliverMap": {
"#": "publish",
"/upload/+/#": "upload"
}
}
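With the deliverMap above, a message published on /upload/img/1 is delivered to both the "publish" topic (via the "#" pattern) and the "upload" topic (via "/upload/+/#"). A self-contained copy of the matching recursion from kafka.go shows why:

```go
package main

import (
	"fmt"
	"strings"
)

// match is a standalone copy of the MQTT-style wildcard matcher from
// plugins/bridge/kafka.go: "+" matches one topic level, "#" matches the rest.
func match(subTopic []string, topic []string) bool {
	if len(subTopic) == 0 {
		return len(topic) == 0
	}
	if len(topic) == 0 {
		return subTopic[0] == "#"
	}
	if subTopic[0] == "#" {
		return true
	}
	if subTopic[0] == "+" || subTopic[0] == topic[0] {
		return match(subTopic[1:], topic[1:])
	}
	return false
}

func matchTopic(subTopic, topic string) bool {
	return match(strings.Split(subTopic, "/"), strings.Split(topic, "/"))
}

func main() {
	// Both deliverMap patterns from the sample config match this topic.
	fmt.Println(matchTopic("#", "/upload/img/1"))             // true
	fmt.Println(matchTopic("/upload/+/#", "/upload/img/1"))   // true
	fmt.Println(matchTopic("/upload/+/#", "/download/img/1")) // false
}
```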

plugins/bridge/mock.go Normal file

@@ -0,0 +1,7 @@
package bridge
type mockMQ struct{}
func (m *mockMQ) Publish(e *Elements) (bool, error) {
return false, nil
}

pool/fixpool.go Normal file

@@ -0,0 +1,56 @@
package pool
import (
"github.com/cespare/xxhash/v2"
)
type WorkerPool struct {
maxWorkers int
taskQueue []chan func()
stoppedChan chan struct{}
}
func New(maxWorkers int) *WorkerPool {
// There must be at least one worker.
if maxWorkers < 1 {
maxWorkers = 1
}
// Each worker owns a buffered task queue; Submit hashes the uid so tasks
// for the same client always land on the same queue.
pool := &WorkerPool{
taskQueue: make([]chan func(), maxWorkers),
maxWorkers: maxWorkers,
stoppedChan: make(chan struct{}),
}
// Create the per-worker queues and start the workers.
pool.dispatch()
return pool
}
func (p *WorkerPool) Submit(uid string, task func()) {
if task == nil {
return
}
idx := xxhash.Sum64([]byte(uid)) % uint64(p.maxWorkers)
p.taskQueue[idx] <- task
}
func (p *WorkerPool) dispatch() {
for i := 0; i < p.maxWorkers; i++ {
p.taskQueue[i] = make(chan func(), 1024)
go startWorker(p.taskQueue[i])
}
}
func startWorker(taskChan chan func()) {
var task func()
var ok bool
for {
task, ok = <-taskChan
if !ok {
break
}
// Execute the task.
task()
}
}
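The key idea in this fixed pool is hash partitioning: hash the client id, take it modulo the worker count, and all tasks for one client land on the same queue and run in order. The real pool uses github.com/cespare/xxhash/v2; the sketch below substitutes stdlib hash/fnv purely to stay dependency-free, and the `workerIndex` helper is ours:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// workerIndex shows the partitioning behind WorkerPool.Submit: a stable
// hash of the client id modulo the worker count picks the worker queue,
// so tasks for the same client are never reordered across workers.
func workerIndex(uid string, maxWorkers int) int {
	h := fnv.New64a()
	h.Write([]byte(uid))
	return int(h.Sum64() % uint64(maxWorkers))
}

func main() {
	for _, uid := range []string{"client-a", "client-b", "client-a"} {
		// The same uid always maps to the same index.
		fmt.Println(uid, "->", workerIndex(uid, 4))
	}
}
```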


@@ -1,166 +0,0 @@
package pool
import "time"
const (
// This value is the size of the channel on which workers register their
// availability with the dispatcher. There may be hundreds of workers, but
// only a small channel is needed to register some of the workers.
readyQueueSize = 16
// If worker pool receives no new work for this period of time, then stop
// a worker goroutine.
idleTimeoutSec = 5
)
type WorkerPool struct {
maxWorkers int
timeout time.Duration
taskQueue chan func()
readyWorkers chan chan func()
stoppedChan chan struct{}
}
func New(maxWorkers int) *WorkerPool {
// There must be at least one worker.
if maxWorkers < 1 {
maxWorkers = 1
}
// taskQueue is unbuffered since items are always removed immediately.
pool := &WorkerPool{
taskQueue: make(chan func()),
maxWorkers: maxWorkers,
readyWorkers: make(chan chan func(), readyQueueSize),
timeout: time.Second * idleTimeoutSec,
stoppedChan: make(chan struct{}),
}
// Start the task dispatcher.
go pool.dispatch()
return pool
}
func (p *WorkerPool) Stop() {
if p.Stopped() {
return
}
close(p.taskQueue)
<-p.stoppedChan
}
func (p *WorkerPool) Stopped() bool {
select {
case <-p.stoppedChan:
return true
default:
}
return false
}
func (p *WorkerPool) Submit(task func()) {
if task != nil {
p.taskQueue <- task
}
}
func (p *WorkerPool) SubmitWait(task func()) {
if task == nil {
return
}
doneChan := make(chan struct{})
p.taskQueue <- func() {
task()
close(doneChan)
}
<-doneChan
}
func (p *WorkerPool) dispatch() {
defer close(p.stoppedChan)
timeout := time.NewTimer(p.timeout)
var workerCount int
var task func()
var ok bool
var workerTaskChan chan func()
startReady := make(chan chan func())
Loop:
for {
timeout.Reset(p.timeout)
select {
case task, ok = <-p.taskQueue:
if !ok {
break Loop
}
// Got a task to do.
select {
case workerTaskChan = <-p.readyWorkers:
// A worker is ready, so give task to worker.
workerTaskChan <- task
default:
// No workers ready.
// Create a new worker, if not at max.
if workerCount < p.maxWorkers {
workerCount++
go func(t func()) {
startWorker(startReady, p.readyWorkers)
// Submit the task to the new worker once it is ready.
taskChan := <-startReady
taskChan <- t
}(task)
} else {
// Start a goroutine to submit the task when an existing
// worker is ready.
go func(t func()) {
taskChan := <-p.readyWorkers
taskChan <- t
}(task)
}
}
case <-timeout.C:
// Timed out waiting for work to arrive. Kill a ready worker.
if workerCount > 0 {
select {
case workerTaskChan = <-p.readyWorkers:
// A worker is ready, so kill.
close(workerTaskChan)
workerCount--
default:
// No work, but no ready workers. All workers are busy.
}
}
}
}
// Stop all remaining workers as they become ready.
for workerCount > 0 {
workerTaskChan = <-p.readyWorkers
close(workerTaskChan)
workerCount--
}
}
func startWorker(startReady, readyWorkers chan chan func()) {
go func() {
taskChan := make(chan func())
var task func()
var ok bool
// Register availability on the startReady channel.
startReady <- taskChan
for {
// Read task from dispatcher.
task, ok = <-taskChan
if !ok {
// Dispatcher has told worker to stop.
break
}
// Execute the task.
task()
// Register availability on readyWorkers channel.
readyWorkers <- taskChan
}
}()
}