commit 3bb2fdfad6
2020-01-28 14:59:07 -06:00

Initial commit

108 changed files with 24266 additions and 0 deletions

.gitattributes (vendored, new file, 2 additions)

@@ -0,0 +1,2 @@
# Auto detect text files and perform LF normalization
* text=auto

LICENSE (new file, 674 additions)

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

README.md (new file, 2 additions)

@@ -0,0 +1,2 @@
# AWS-Device
Files for using AWS IoT and for collecting data from various data generators.

File diff suppressed because it is too large.

New file (name not shown, 3 additions)

@@ -0,0 +1,3 @@
__version__ = "1.4.8"

Binary file not shown.

AWSIoTPythonSDK/core/greengrass/discovery/models.py (new file, 466 additions)

@@ -0,0 +1,466 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import json
KEY_GROUP_LIST = "GGGroups"
KEY_GROUP_ID = "GGGroupId"
KEY_CORE_LIST = "Cores"
KEY_CORE_ARN = "thingArn"
KEY_CA_LIST = "CAs"
KEY_CONNECTIVITY_INFO_LIST = "Connectivity"
KEY_CONNECTIVITY_INFO_ID = "Id"
KEY_HOST_ADDRESS = "HostAddress"
KEY_PORT_NUMBER = "PortNumber"
KEY_METADATA = "Metadata"
class ConnectivityInfo(object):
"""
Class that stores one set of the connectivity information.
This is the data model for easy access to the discovery information from the discovery request function call. No
need to call directly from user scripts.
"""
def __init__(self, id, host, port, metadata):
self._id = id
self._host = host
self._port = port
self._metadata = metadata
@property
def id(self):
"""
Connectivity Information Id.
"""
return self._id
@property
def host(self):
"""
Host address.
"""
return self._host
@property
def port(self):
"""
Port number.
"""
return self._port
@property
def metadata(self):
"""
Metadata string.
"""
return self._metadata
class CoreConnectivityInfo(object):
"""
Class that stores the connectivity information for a Greengrass core.
This is the data model for easy access to the discovery information from the discovery request function call. No
need to call directly from user scripts.
"""
def __init__(self, coreThingArn, groupId):
self._core_thing_arn = coreThingArn
self._group_id = groupId
self._connectivity_info_dict = dict()
@property
def coreThingArn(self):
"""
Thing arn for this Greengrass core.
"""
return self._core_thing_arn
@property
def groupId(self):
"""
Greengrass group id that this Greengrass core belongs to.
"""
return self._group_id
@property
def connectivityInfoList(self):
"""
The list of connectivity information that this Greengrass core has.
"""
return list(self._connectivity_info_dict.values())
def getConnectivityInfo(self, id):
"""
**Description**
Used for quickly accessing a certain set of connectivity information by id.
**Syntax**
.. code:: python
myCoreConnectivityInfo.getConnectivityInfo("CoolId")
**Parameters**
*id* - The id for the desired connectivity information.
**Return**
:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.ConnectivityInfo` object.
"""
return self._connectivity_info_dict.get(id)
def appendConnectivityInfo(self, connectivityInfo):
"""
**Description**
Used for adding a new set of connectivity information to the list for this Greengrass core. This is used by the
SDK internally. No need to call directly from user scripts.
**Syntax**
.. code:: python
myCoreConnectivityInfo.appendConnectivityInfo(newInfo)
**Parameters**
*connectivityInfo* - :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.ConnectivityInfo` object.
**Returns**
None
"""
self._connectivity_info_dict[connectivityInfo.id] = connectivityInfo
class GroupConnectivityInfo(object):
"""
Class that stores the connectivity information for a specific Greengrass group.
This is the data model for easy access to the discovery information from the discovery request function call. No
need to call directly from user scripts.
"""
def __init__(self, groupId):
self._group_id = groupId
self._core_connectivity_info_dict = dict()
self._ca_list = list()
@property
def groupId(self):
"""
Id for this Greengrass group.
"""
return self._group_id
@property
def coreConnectivityInfoList(self):
"""
A list of Greengrass cores
(:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object) that belong to this
Greengrass group.
"""
return list(self._core_connectivity_info_dict.values())
@property
def caList(self):
"""
A list of CA content strings for this Greengrass group.
"""
return self._ca_list
def getCoreConnectivityInfo(self, coreThingArn):
"""
**Description**
Used to retrieve the corresponding :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo`
object by core thing arn.
**Syntax**
.. code:: python
myGroupConnectivityInfo.getCoreConnectivityInfo("YourOwnArnString")
**Parameters**
coreThingArn - Thing arn for the desired Greengrass core.
**Returns**
:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object.
"""
return self._core_connectivity_info_dict.get(coreThingArn)
def appendCoreConnectivityInfo(self, coreConnectivityInfo):
"""
**Description**
Used to append new core connectivity information to this group connectivity information. This is used by the
SDK internally. No need to call directly from user scripts.
**Syntax**
.. code:: python
myGroupConnectivityInfo.appendCoreConnectivityInfo(newCoreConnectivityInfo)
**Parameters**
*coreConnectivityInfo* - :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object.
**Returns**
None
"""
self._core_connectivity_info_dict[coreConnectivityInfo.coreThingArn] = coreConnectivityInfo
def appendCa(self, ca):
"""
**Description**
Used to append new CA content string to this group connectivity information. This is used by the SDK internally.
No need to call directly from user scripts.
**Syntax**
.. code:: python
myGroupConnectivityInfo.appendCa("CaContentString")
**Parameters**
*ca* - Group CA content string.
**Returns**
None
"""
self._ca_list.append(ca)
class DiscoveryInfo(object):
"""
Class that stores the discovery information coming back from the discovery request.
This is the data model for easy access to the discovery information from the discovery request function call. No
need to call directly from user scripts.
"""
def __init__(self, rawJson):
self._raw_json = rawJson
@property
def rawJson(self):
"""
JSON response string that contains the discovery information. This is kept available in case users want to do
some processing of their own.
"""
return self._raw_json
def getAllCores(self):
"""
**Description**
Used to retrieve the list of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo`
object for this discovery information. The retrieved cores could be from different Greengrass groups. This is
designed for users who want to iterate through all available cores at the same time, regardless of which group
those cores are in.
**Syntax**
.. code:: python
myDiscoveryInfo.getAllCores()
**Parameters**
None
**Returns**
List of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` objects.
"""
groups_list = self.getAllGroups()
core_list = list()
for group in groups_list:
core_list.extend(group.coreConnectivityInfoList)
return core_list
def getAllCas(self):
"""
**Description**
Used to retrieve the list of :code:`(groupId, caContent)` pairs for this discovery information. The retrieved
pairs could be from different Greengrass groups. This is designed for users who want to iterate through all
available cores/groups/CAs at the same time, regardless of which group those CAs belong to.
**Syntax**
.. code:: python
myDiscoveryInfo.getAllCas()
**Parameters**
None
**Returns**
List of :code:`(groupId, caContent)` string pairs, where :code:`caContent` is the CA content string and
:code:`groupId` is the group id that this CA belongs to.
"""
group_list = self.getAllGroups()
ca_list = list()
for group in group_list:
for ca in group.caList:
ca_list.append((group.groupId, ca))
return ca_list
def getAllGroups(self):
"""
**Description**
Used to retrieve the list of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo`
object for this discovery information. This is designed for users who want to iterate through all available
groups that this Greengrass aware device (GGAD) belongs to.
**Syntax**
.. code:: python
myDiscoveryInfo.getAllGroups()
**Parameters**
None
**Returns**
List of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` objects.
"""
groups_dict = self.toObjectAtGroupLevel()
return list(groups_dict.values())
def toObjectAtGroupLevel(self):
"""
**Description**
Used to get a dictionary of Greengrass group discovery information, with group id string as key and the
corresponding :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` object as the
value. This is designed for users who know exactly which group, which core and which set of connectivity info
they want to use for the Greengrass aware device to connect.
**Syntax**
.. code:: python
# Get to the targeted connectivity information for a specific core in a specific group
groupLevelDiscoveryInfoObj = myDiscoveryInfo.toObjectAtGroupLevel()
groupConnectivityInfoObj = groupLevelDiscoveryInfoObj["IKnowMyGroupId"]
coreConnectivityInfoObj = groupConnectivityInfoObj.getCoreConnectivityInfo("IKnowMyCoreThingArn")
connectivityInfo = coreConnectivityInfoObj.getConnectivityInfo("IKnowMyConnectivityInfoSetId")
# Now retrieve the detailed information
caList = groupConnectivityInfoObj.caList
host = connectivityInfo.host
port = connectivityInfo.port
metadata = connectivityInfo.metadata
# Actual connecting logic follows...
"""
groups_object = json.loads(self._raw_json)
groups_dict = dict()
for group_object in groups_object[KEY_GROUP_LIST]:
group_info = self._decode_group_info(group_object)
groups_dict[group_info.groupId] = group_info
return groups_dict
def _decode_group_info(self, group_object):
group_id = group_object[KEY_GROUP_ID]
group_info = GroupConnectivityInfo(group_id)
for core in group_object[KEY_CORE_LIST]:
core_info = self._decode_core_info(core, group_id)
group_info.appendCoreConnectivityInfo(core_info)
for ca in group_object[KEY_CA_LIST]:
group_info.appendCa(ca)
return group_info
def _decode_core_info(self, core_object, group_id):
core_info = CoreConnectivityInfo(core_object[KEY_CORE_ARN], group_id)
for connectivity_info_object in core_object[KEY_CONNECTIVITY_INFO_LIST]:
connectivity_info = ConnectivityInfo(connectivity_info_object[KEY_CONNECTIVITY_INFO_ID],
connectivity_info_object[KEY_HOST_ADDRESS],
connectivity_info_object[KEY_PORT_NUMBER],
connectivity_info_object.get(KEY_METADATA,''))
core_info.appendConnectivityInfo(connectivity_info)
return core_info
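# A minimal usage sketch for the models above (hypothetical names; it assumes
# a DiscoveryInfo object, my_discovery_info, has already been obtained from a
# discovery request). Walk the information: groups -> cores -> connectivity sets.
for group in my_discovery_info.getAllGroups():
    print("Group: " + group.groupId)
    print("Group CAs: " + str(len(group.caList)))
    for core in group.coreConnectivityInfoList:
        print("  Core: " + core.coreThingArn)
        for info in core.connectivityInfoList:
            print("    " + info.id + " -> " + info.host + ":" + str(info.port))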

AWSIoTPythonSDK/core/greengrass/discovery/providers.py (new file, 426 additions)

@@ -0,0 +1,426 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryInvalidRequestException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryUnauthorizedException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryDataNotFoundException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryThrottlingException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryFailure
from AWSIoTPythonSDK.core.greengrass.discovery.models import DiscoveryInfo
from AWSIoTPythonSDK.core.protocol.connection.alpn import SSLContextBuilder
import re
import sys
import ssl
import time
import errno
import logging
import socket
import platform
if platform.system() == 'Windows':
EAGAIN = errno.WSAEWOULDBLOCK
else:
EAGAIN = errno.EAGAIN
class DiscoveryInfoProvider(object):
REQUEST_TYPE_PREFIX = "GET "
PAYLOAD_PREFIX = "/greengrass/discover/thing/"
PAYLOAD_SUFFIX = " HTTP/1.1\r\n" # Space in the front
HOST_PREFIX = "Host: "
HOST_SUFFIX = "\r\n\r\n"
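# Assembled from the pieces above, the discovery request is a bare HTTP/1.1
# GET sent over the TLS socket; with placeholder values it looks like:
#   GET /greengrass/discover/thing/myGGAD HTTP/1.1\r\n
#   Host: prefix.iot.us-east-1.amazonaws.com:8443\r\n\r\n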
HTTP_PROTOCOL = r"HTTP/1.1 "
CONTENT_LENGTH = r"content-length: "
CONTENT_LENGTH_PATTERN = CONTENT_LENGTH + r"([0-9]+)\r\n"
HTTP_RESPONSE_CODE_PATTERN = HTTP_PROTOCOL + r"([0-9]+) "
HTTP_SC_200 = "200"
HTTP_SC_400 = "400"
HTTP_SC_401 = "401"
HTTP_SC_404 = "404"
HTTP_SC_429 = "429"
LOW_LEVEL_RC_COMPLETE = 0
LOW_LEVEL_RC_TIMEOUT = -1
_logger = logging.getLogger(__name__)
def __init__(self, caPath="", certPath="", keyPath="", host="", port=8443, timeoutSec=120):
"""
The class that provides functionality to perform a Greengrass discovery process to the cloud.
Users can perform Greengrass discovery process for a specific Greengrass aware device to retrieve
connectivity/identity information of Greengrass cores within the same group.
**Syntax**
.. code:: python
from AWSIoTPythonSDK.core.greengrass.discovery.providers import DiscoveryInfoProvider
# Create a discovery information provider
myDiscoveryInfoProvider = DiscoveryInfoProvider()
# Create a discovery information provider with custom configuration
myDiscoveryInfoProvider = DiscoveryInfoProvider(caPath=myCAPath, certPath=myCertPath, keyPath=myKeyPath, host=myHost, timeoutSec=myTimeoutSec)
**Parameters**
*caPath* - Path to read the root CA file.
*certPath* - Path to read the certificate file.
*keyPath* - Path to read the private key file.
*host* - String that denotes the host name of the user-specific AWS IoT endpoint.
*port* - Integer that denotes the port number to connect to. For discovery purposes, it is 8443 by default.
*timeoutSec* - Timeout, in seconds, after which a discovery request send or response wait is considered to
have timed out.
**Returns**
AWSIoTPythonSDK.core.greengrass.discovery.providers.DiscoveryInfoProvider object
"""
self._ca_path = caPath
self._cert_path = certPath
self._key_path = keyPath
self._host = host
self._port = port
self._timeout_sec = timeoutSec
self._expected_exception_map = {
self.HTTP_SC_400 : DiscoveryInvalidRequestException(),
self.HTTP_SC_401 : DiscoveryUnauthorizedException(),
self.HTTP_SC_404 : DiscoveryDataNotFoundException(),
self.HTTP_SC_429 : DiscoveryThrottlingException()
}
def configureEndpoint(self, host, port=8443):
"""
**Description**
Used to configure the host address and port number for the discovery request to hit. Should be called before
the discovery request happens.
**Syntax**
.. code:: python
# Using default port configuration, 8443
myDiscoveryInfoProvider.configureEndpoint(host="prefix.iot.us-east-1.amazonaws.com")
# Customize port configuration
myDiscoveryInfoProvider.configureEndpoint(host="prefix.iot.us-east-1.amazonaws.com", port=8888)
**Parameters**
*host* - String that denotes the host name of the user-specific AWS IoT endpoint.
*port* - Integer that denotes the port number to connect to. For discovery purposes, it is 8443 by default.
**Returns**
None
"""
self._host = host
self._port = port
def configureCredentials(self, caPath, certPath, keyPath):
"""
**Description**
Used to configure the credentials for discovery request. Should be called before the discovery request happens.
**Syntax**
.. code:: python
myDiscoveryInfoProvider.configureCredentials("my/ca/path", "my/cert/path", "my/key/path")
**Parameters**
*caPath* - Path to read the root CA file.
*certPath* - Path to read the certificate file.
*keyPath* - Path to read the private key file.
**Returns**
None
"""
self._ca_path = caPath
self._cert_path = certPath
self._key_path = keyPath
def configureTimeout(self, timeoutSec):
"""
**Description**
Used to configure the timeout in seconds for discovery request sending/response waiting. Should be called before
the discovery request happens.
**Syntax**
.. code:: python
# Configure the time out for discovery to be 10 seconds
myDiscoveryInfoProvider.configureTimeout(10)
**Parameters**
*timeoutSec* - Timeout, in seconds, after which a discovery request send or response wait is considered to
have timed out.
**Returns**
None
"""
self._timeout_sec = timeoutSec
def discover(self, thingName):
"""
**Description**
Perform the discovery request for the given Greengrass aware device thing name.
**Syntax**
.. code:: python
myDiscoveryInfoProvider.discover(thingName="myGGAD")
**Parameters**
*thingName* - Greengrass aware device thing name.
**Returns**
:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.DiscoveryInfo` object.
"""
self._logger.info("Starting discover request...")
self._logger.info("Endpoint: " + self._host + ":" + str(self._port))
self._logger.info("Target thing: " + thingName)
sock = self._create_tcp_connection()
ssl_sock = self._create_ssl_connection(sock)
self._raise_on_timeout(self._send_discovery_request(ssl_sock, thingName))
status_code, response_body = self._receive_discovery_response(ssl_sock)
return self._raise_if_not_200(status_code, response_body)
def _create_tcp_connection(self):
self._logger.debug("Creating tcp connection...")
try:
if (sys.version_info[0] == 2 and sys.version_info[1] < 7) or (sys.version_info[0] == 3 and sys.version_info[1] < 2):
sock = socket.create_connection((self._host, self._port))
else:
sock = socket.create_connection((self._host, self._port), source_address=("", 0))
return sock
except socket.error as err:
if err.errno != errno.EINPROGRESS and err.errno != errno.EWOULDBLOCK and err.errno != EAGAIN:
raise
self._logger.debug("Created tcp connection.")
def _create_ssl_connection(self, sock):
self._logger.debug("Creating ssl connection...")
ssl_protocol_version = ssl.PROTOCOL_SSLv23
if self._port == 443:
ssl_context = SSLContextBuilder()\
.with_ca_certs(self._ca_path)\
.with_cert_key_pair(self._cert_path, self._key_path)\
.with_cert_reqs(ssl.CERT_REQUIRED)\
.with_check_hostname(True)\
.with_ciphers(None)\
.with_alpn_protocols(['x-amzn-http-ca'])\
.build()
ssl_sock = ssl_context.wrap_socket(sock, server_hostname=self._host, do_handshake_on_connect=False)
ssl_sock.do_handshake()
else:
ssl_sock = ssl.wrap_socket(sock,
certfile=self._cert_path,
keyfile=self._key_path,
ca_certs=self._ca_path,
cert_reqs=ssl.CERT_REQUIRED,
ssl_version=ssl_protocol_version)
self._logger.debug("Matching host name...")
if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and sys.version_info[1] < 2):
self._tls_match_hostname(ssl_sock)
else:
ssl.match_hostname(ssl_sock.getpeercert(), self._host)
return ssl_sock
def _tls_match_hostname(self, ssl_sock):
try:
cert = ssl_sock.getpeercert()
except AttributeError:
# the getpeercert can throw Attribute error: object has no attribute 'peer_certificate'
# Don't let that crash the whole client. See also: http://bugs.python.org/issue13721
raise ssl.SSLError('Not connected')
san = cert.get('subjectAltName')
if san:
have_san_dns = False
for (key, value) in san:
if key == 'DNS':
have_san_dns = True
if self._host_matches_cert(self._host.lower(), value.lower()):
return
if key == 'IP Address':
have_san_dns = True
if value.lower() == self._host.lower():
return
if have_san_dns:
# Only check subject if subjectAltName dns not found.
raise ssl.SSLError('Certificate subject does not match remote hostname.')
subject = cert.get('subject')
if subject:
for ((key, value),) in subject:
if key == 'commonName':
if self._host_matches_cert(self._host.lower(), value.lower()):
return
raise ssl.SSLError('Certificate subject does not match remote hostname.')
def _host_matches_cert(self, host, cert_host):
if cert_host[0:2] == "*.":
if cert_host.count("*") != 1:
return False
host_match = host.split(".", 1)[1]
cert_match = cert_host.split(".", 1)[1]
if host_match == cert_match:
return True
else:
return False
else:
if host == cert_host:
return True
else:
return False
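# Note on _host_matches_cert above: a leading "*." wildcard covers exactly one
# DNS label, so "*.example.com" matches "a.example.com" but matches neither
# "a.b.example.com" nor "example.com"; non-wildcard names must match exactly.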
def _send_discovery_request(self, ssl_sock, thing_name):
request = self.REQUEST_TYPE_PREFIX + \
self.PAYLOAD_PREFIX + \
thing_name + \
self.PAYLOAD_SUFFIX + \
self.HOST_PREFIX + \
self._host + ":" + str(self._port) + \
self.HOST_SUFFIX
self._logger.debug("Sending discover request: " + request)
start_time = time.time()
desired_length_to_write = len(request)
actual_length_written = 0
while True:
try:
length_written = ssl_sock.write(request.encode("utf-8"))
actual_length_written += length_written
except socket.error as err:
if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE:
pass
if actual_length_written == desired_length_to_write:
return self.LOW_LEVEL_RC_COMPLETE
if start_time + self._timeout_sec < time.time():
return self.LOW_LEVEL_RC_TIMEOUT
def _receive_discovery_response(self, ssl_sock):
self._logger.debug("Receiving discover response header...")
rc1, response_header = self._receive_until(ssl_sock, self._got_two_crlfs)
status_code, body_length = self._handle_discovery_response_header(rc1, response_header.decode("utf-8"))
self._logger.debug("Receiving discover response body...")
rc2, response_body = self._receive_until(ssl_sock, self._got_enough_bytes, body_length)
response_body = self._handle_discovery_response_body(rc2, response_body.decode("utf-8"))
return status_code, response_body
def _receive_until(self, ssl_sock, criteria_function, extra_data=None):
start_time = time.time()
response = bytearray()
number_bytes_read = 0
while True: # Python does not have do-while
try:
response.append(self._convert_to_int_py3(ssl_sock.read(1)))
number_bytes_read += 1
except socket.error as err:
if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE:
pass
if criteria_function((number_bytes_read, response, extra_data)):
return self.LOW_LEVEL_RC_COMPLETE, response
if start_time + self._timeout_sec < time.time():
return self.LOW_LEVEL_RC_TIMEOUT, response
def _convert_to_int_py3(self, input_char):
try:
return ord(input_char)
except:
return input_char
def _got_enough_bytes(self, data):
number_bytes_read, response, target_length = data
return number_bytes_read == int(target_length)
def _got_two_crlfs(self, data):
number_bytes_read, response, extra_data_unused = data
number_of_crlf = 2
has_enough_bytes = number_bytes_read > number_of_crlf * 2 - 1
if has_enough_bytes:
end_of_received = response[number_bytes_read - number_of_crlf * 2 : number_bytes_read]
expected_end_of_response = b"\r\n" * number_of_crlf
return end_of_received == expected_end_of_response
else:
return False
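# For example, the header phase ends once the received bytes end in the blank
# line that terminates an HTTP header block:
#   _got_two_crlfs((4, bytearray(b"\r\n\r\n"), None)) -> True
#   _got_two_crlfs((3, bytearray(b"\r\n\r"), None)) -> False (fewer than 4 bytes received)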
def _handle_discovery_response_header(self, rc, response):
self._raise_on_timeout(rc)
http_status_code_matcher = re.compile(self.HTTP_RESPONSE_CODE_PATTERN)
http_status_code_matched_groups = http_status_code_matcher.match(response)
content_length_matcher = re.compile(self.CONTENT_LENGTH_PATTERN)
content_length_matched_groups = content_length_matcher.search(response)
return http_status_code_matched_groups.group(1), content_length_matched_groups.group(1)
def _handle_discovery_response_body(self, rc, response):
self._raise_on_timeout(rc)
return response
def _raise_on_timeout(self, rc):
if rc == self.LOW_LEVEL_RC_TIMEOUT:
raise DiscoveryTimeoutException()
def _raise_if_not_200(self, status_code, response_body): # response_body here is str in Py3
if status_code != self.HTTP_SC_200:
expected_exception = self._expected_exception_map.get(status_code)
if expected_exception:
raise expected_exception
else:
raise DiscoveryFailure(response_body)
return DiscoveryInfo(response_body)

View File

@@ -0,0 +1,156 @@
# /*
# * Copyright 2010-2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import json
_BASE_THINGS_TOPIC = "$aws/things/"
_NOTIFY_OPERATION = "notify"
_NOTIFY_NEXT_OPERATION = "notify-next"
_GET_OPERATION = "get"
_START_NEXT_OPERATION = "start-next"
_WILDCARD_OPERATION = "+"
_UPDATE_OPERATION = "update"
_ACCEPTED_REPLY = "accepted"
_REJECTED_REPLY = "rejected"
_WILDCARD_REPLY = "#"
#Members of this enum are tuples
_JOB_ID_REQUIRED_INDEX = 1
_JOB_OPERATION_INDEX = 2
_STATUS_KEY = 'status'
_STATUS_DETAILS_KEY = 'statusDetails'
_EXPECTED_VERSION_KEY = 'expectedVersion'
_EXECUTION_NUMBER_KEY = 'executionNumber'
_INCLUDE_JOB_EXECUTION_STATE_KEY = 'includeJobExecutionState'
_INCLUDE_JOB_DOCUMENT_KEY = 'includeJobDocument'
_CLIENT_TOKEN_KEY = 'clientToken'
_STEP_TIMEOUT_IN_MINUTES_KEY = 'stepTimeoutInMinutes'
#The type of job topic.
class jobExecutionTopicType(object):
JOB_UNRECOGNIZED_TOPIC = (0, False, '')
JOB_GET_PENDING_TOPIC = (1, False, _GET_OPERATION)
JOB_START_NEXT_TOPIC = (2, False, _START_NEXT_OPERATION)
JOB_DESCRIBE_TOPIC = (3, True, _GET_OPERATION)
JOB_UPDATE_TOPIC = (4, True, _UPDATE_OPERATION)
JOB_NOTIFY_TOPIC = (5, False, _NOTIFY_OPERATION)
JOB_NOTIFY_NEXT_TOPIC = (6, False, _NOTIFY_NEXT_OPERATION)
JOB_WILDCARD_TOPIC = (7, False, _WILDCARD_OPERATION)
#Members of this enum are tuples
_JOB_SUFFIX_INDEX = 1
#The type of reply topic, or #JOB_REQUEST_TYPE for topics that are not replies.
class jobExecutionTopicReplyType(object):
JOB_UNRECOGNIZED_TOPIC_TYPE = (0, '')
JOB_REQUEST_TYPE = (1, '')
JOB_ACCEPTED_REPLY_TYPE = (2, '/' + _ACCEPTED_REPLY)
JOB_REJECTED_REPLY_TYPE = (3, '/' + _REJECTED_REPLY)
JOB_WILDCARD_REPLY_TYPE = (4, '/' + _WILDCARD_REPLY)
_JOB_STATUS_INDEX = 1
class jobExecutionStatus(object):
JOB_EXECUTION_STATUS_NOT_SET = (0, None)
JOB_EXECUTION_QUEUED = (1, 'QUEUED')
JOB_EXECUTION_IN_PROGRESS = (2, 'IN_PROGRESS')
JOB_EXECUTION_FAILED = (3, 'FAILED')
JOB_EXECUTION_SUCCEEDED = (4, 'SUCCEEDED')
JOB_EXECUTION_CANCELED = (5, 'CANCELED')
JOB_EXECUTION_REJECTED = (6, 'REJECTED')
JOB_EXECUTION_UNKNOWN_STATUS = (99, None)
def _getExecutionStatus(jobStatus):
try:
return jobStatus[_JOB_STATUS_INDEX]
except (IndexError, TypeError): # status values are tuples, so a bad lookup raises IndexError/TypeError, not KeyError
return None
def _isWithoutJobIdTopicType(srcJobExecTopicType):
return srcJobExecTopicType in (jobExecutionTopicType.JOB_GET_PENDING_TOPIC, jobExecutionTopicType.JOB_START_NEXT_TOPIC,
jobExecutionTopicType.JOB_NOTIFY_TOPIC, jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC)
class thingJobManager:
def __init__(self, thingName, clientToken = None):
self._thingName = thingName
self._clientToken = clientToken
def getJobTopic(self, srcJobExecTopicType, srcJobExecTopicReplyType=jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId=None):
if self._thingName is None:
return None
#Verify topics that only support request type, actually have request type specified for reply
if (srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) and srcJobExecTopicReplyType != jobExecutionTopicReplyType.JOB_REQUEST_TYPE:
return None
#Verify topics that explicitly do not want a job ID do not have one specified
if (jobId is not None and _isWithoutJobIdTopicType(srcJobExecTopicType)):
return None
#Verify job ID is present if the topic requires one
if jobId is None and srcJobExecTopicType[_JOB_ID_REQUIRED_INDEX]:
return None
#Ensure the job operation is a non-empty string
if srcJobExecTopicType[_JOB_OPERATION_INDEX] == '':
return None
if srcJobExecTopicType[_JOB_ID_REQUIRED_INDEX]:
return '{0}{1}/jobs/{2}/{3}{4}'.format(_BASE_THINGS_TOPIC, self._thingName, str(jobId), srcJobExecTopicType[_JOB_OPERATION_INDEX], srcJobExecTopicReplyType[_JOB_SUFFIX_INDEX])
elif srcJobExecTopicType == jobExecutionTopicType.JOB_WILDCARD_TOPIC:
return '{0}{1}/jobs/#'.format(_BASE_THINGS_TOPIC, self._thingName)
else:
return '{0}{1}/jobs/{2}{3}'.format(_BASE_THINGS_TOPIC, self._thingName, srcJobExecTopicType[_JOB_OPERATION_INDEX], srcJobExecTopicReplyType[_JOB_SUFFIX_INDEX])
def serializeJobExecutionUpdatePayload(self, status, statusDetails=None, expectedVersion=0, executionNumber=0, includeJobExecutionState=False, includeJobDocument=False, stepTimeoutInMinutes=None):
executionStatus = _getExecutionStatus(status)
if executionStatus is None:
return None
payload = {_STATUS_KEY: executionStatus}
if statusDetails:
payload[_STATUS_DETAILS_KEY] = statusDetails
if expectedVersion > 0:
payload[_EXPECTED_VERSION_KEY] = str(expectedVersion)
if executionNumber > 0:
payload[_EXECUTION_NUMBER_KEY] = str(executionNumber)
if includeJobExecutionState:
payload[_INCLUDE_JOB_EXECUTION_STATE_KEY] = True
if includeJobDocument:
payload[_INCLUDE_JOB_DOCUMENT_KEY] = True
if self._clientToken is not None:
payload[_CLIENT_TOKEN_KEY] = self._clientToken
if stepTimeoutInMinutes is not None:
payload[_STEP_TIMEOUT_IN_MINUTES_KEY] = stepTimeoutInMinutes
return json.dumps(payload)
def serializeDescribeJobExecutionPayload(self, executionNumber=0, includeJobDocument=True):
payload = {_INCLUDE_JOB_DOCUMENT_KEY: includeJobDocument}
if executionNumber > 0:
payload[_EXECUTION_NUMBER_KEY] = executionNumber
if self._clientToken is not None:
payload[_CLIENT_TOKEN_KEY] = self._clientToken
return json.dumps(payload)
def serializeStartNextPendingJobExecutionPayload(self, statusDetails=None, stepTimeoutInMinutes=None):
payload = {}
if self._clientToken is not None:
payload[_CLIENT_TOKEN_KEY] = self._clientToken
if statusDetails is not None:
payload[_STATUS_DETAILS_KEY] = statusDetails
if stepTimeoutInMinutes is not None:
payload[_STEP_TIMEOUT_IN_MINUTES_KEY] = stepTimeoutInMinutes
return json.dumps(payload)
def serializeClientTokenPayload(self):
return json.dumps({_CLIENT_TOKEN_KEY: self._clientToken}) if self._clientToken is not None else '{}'
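# A minimal usage sketch (thing name and client token are placeholders; the
# topic and payload shapes follow directly from the formats above):
#   manager = thingJobManager("myThing", clientToken="token-1")
#   manager.getJobTopic(jobExecutionTopicType.JOB_DESCRIBE_TOPIC,
#                       jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE, "42")
#   -> '$aws/things/myThing/jobs/42/get/accepted'
#   manager.serializeJobExecutionUpdatePayload(jobExecutionStatus.JOB_EXECUTION_SUCCEEDED)
#   -> '{"status": "SUCCEEDED", "clientToken": "token-1"}' (key order may vary)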

View File

@@ -0,0 +1,63 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
try:
import ssl
except ImportError:
ssl = None
class SSLContextBuilder(object):
def __init__(self):
self.check_supportability()
self._ssl_context = ssl.create_default_context()
def check_supportability(self):
if ssl is None:
raise RuntimeError("This platform has no SSL/TLS.")
if not hasattr(ssl, "SSLContext"):
raise NotImplementedError("This platform does not support SSLContext. Python 2.7.10+/3.5+ is required.")
if not hasattr(ssl.SSLContext, "set_alpn_protocols"):
raise NotImplementedError("This platform does not support ALPN as TLS extensions. Python 2.7.10+/3.5+ is required.")
def with_ca_certs(self, ca_certs):
self._ssl_context.load_verify_locations(ca_certs)
return self
def with_cert_key_pair(self, cert_file, key_file):
self._ssl_context.load_cert_chain(cert_file, key_file)
return self
def with_cert_reqs(self, cert_reqs):
self._ssl_context.verify_mode = cert_reqs
return self
def with_check_hostname(self, check_hostname):
self._ssl_context.check_hostname = check_hostname
return self
def with_ciphers(self, ciphers):
if ciphers is not None:
self._ssl_context.set_ciphers(ciphers) # set_ciphers() does not allow None input. Use default (do nothing) if None
return self
def with_alpn_protocols(self, alpn_protocols):
self._ssl_context.set_alpn_protocols(alpn_protocols)
return self
def build(self):
return self._ssl_context
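# A minimal usage sketch (file paths are placeholders). Each with_* call
# mutates and returns the same builder, so the calls chain freely:
#   context = SSLContextBuilder() \
#       .with_ca_certs("root-ca.pem") \
#       .with_cert_key_pair("cert.pem", "key.pem") \
#       .with_cert_reqs(ssl.CERT_REQUIRED) \
#       .with_check_hostname(True) \
#       .with_alpn_protocols(["x-amzn-mqtt-ca"]) \
#       .build()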

View File

@@ -0,0 +1,699 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
# This class implements the progressive backoff logic for auto-reconnect.
# It manages the reconnect wait time for the current reconnect, controlling
# when to increase it and when to reset it.
import re
import sys
import ssl
import errno
import struct
import socket
import base64
import time
import threading
import logging
import os
from datetime import datetime
import hashlib
import hmac
from AWSIoTPythonSDK.exception.AWSIoTExceptions import ClientError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import wssNoKeyInEnvironmentError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import wssHandShakeError
from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC
try:
from urllib.parse import quote # Python 3+
except ImportError:
from urllib import quote
# INI config file handling
try:
from configparser import ConfigParser # Python 3+
from configparser import NoOptionError
from configparser import NoSectionError
except ImportError:
from ConfigParser import ConfigParser
from ConfigParser import NoOptionError
from ConfigParser import NoSectionError
class ProgressiveBackOffCore:
# Logger
_logger = logging.getLogger(__name__)
def __init__(self, srcBaseReconnectTimeSecond=1, srcMaximumReconnectTimeSecond=32, srcMinimumConnectTimeSecond=20):
# The base reconnection time in seconds, default 1
self._baseReconnectTimeSecond = srcBaseReconnectTimeSecond
# The maximum reconnection time in seconds, default 32
self._maximumReconnectTimeSecond = srcMaximumReconnectTimeSecond
# The minimum time in seconds that a connection must be maintained in order to be considered stable
# Default 20
self._minimumConnectTimeSecond = srcMinimumConnectTimeSecond
# Current backoff time in seconds, initialized to 1
self._currentBackoffTimeSecond = 1
# Handler for timer
self._resetBackoffTimer = None
# For custom progressiveBackoff timing configuration
def configTime(self, srcBaseReconnectTimeSecond, srcMaximumReconnectTimeSecond, srcMinimumConnectTimeSecond):
if srcBaseReconnectTimeSecond < 0 or srcMaximumReconnectTimeSecond < 0 or srcMinimumConnectTimeSecond < 0:
self._logger.error("init: Negative time configuration detected.")
raise ValueError("Negative time configuration detected.")
if srcBaseReconnectTimeSecond >= srcMinimumConnectTimeSecond:
self._logger.error("init: Min connect time should be bigger than base reconnect time.")
raise ValueError("Min connect time should be bigger than base reconnect time.")
self._baseReconnectTimeSecond = srcBaseReconnectTimeSecond
self._maximumReconnectTimeSecond = srcMaximumReconnectTimeSecond
self._minimumConnectTimeSecond = srcMinimumConnectTimeSecond
self._currentBackoffTimeSecond = 1
# Block the reconnect logic for _currentBackoffTimeSecond
# Update the currentBackoffTimeSecond for the next reconnect
# Cancel the in-waiting timer for resetting backOff time
# This should get called only when a disconnect/reconnect happens
def backOff(self):
self._logger.debug("backOff: current backoff time is: " + str(self._currentBackoffTimeSecond) + " sec.")
if self._resetBackoffTimer is not None:
# Cancel the timer
self._resetBackoffTimer.cancel()
# Block the reconnect logic
time.sleep(self._currentBackoffTimeSecond)
# Update the backoff time
if self._currentBackoffTimeSecond == 0:
# This is the first attempt to connect, set it to base
self._currentBackoffTimeSecond = self._baseReconnectTimeSecond
else:
# r_cur = min(2^n*r_base, r_max)
self._currentBackoffTimeSecond = min(self._maximumReconnectTimeSecond, self._currentBackoffTimeSecond * 2)
# Start the timer for resetting _currentBackoffTimeSecond
# Will be cancelled upon calling backOff
def startStableConnectionTimer(self):
self._resetBackoffTimer = threading.Timer(self._minimumConnectTimeSecond,
self._connectionStableThenResetBackoffTime)
self._resetBackoffTimer.start()
def stopStableConnectionTimer(self):
if self._resetBackoffTimer is not None:
# Cancel the timer
self._resetBackoffTimer.cancel()
# Timer callback to reset _currentBackoffTimeSecond
# If the connection is stable for longer than _minimumConnectTimeSecond,
# reset the currentBackoffTimeSecond to _baseReconnectTimeSecond
def _connectionStableThenResetBackoffTime(self):
self._logger.debug(
"stableConnection: Resetting the backoff time to: " + str(self._baseReconnectTimeSecond) + " sec.")
self._currentBackoffTimeSecond = self._baseReconnectTimeSecond
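# With the defaults above (base = 1 sec, max = 32 sec, min connect = 20 sec),
# consecutive unstable reconnects wait 1, 2, 4, 8, 16, 32, 32, ... seconds;
# once a connection survives 20 seconds, the timer resets the wait back to 1.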
class SigV4Core:
_logger = logging.getLogger(__name__)
def __init__(self):
self._aws_access_key_id = ""
self._aws_secret_access_key = ""
self._aws_session_token = ""
self._credentialConfigFilePath = "~/.aws/credentials"
def setIAMCredentials(self, srcAWSAccessKeyID, srcAWSSecretAccessKey, srcAWSSessionToken):
self._aws_access_key_id = srcAWSAccessKeyID
self._aws_secret_access_key = srcAWSSecretAccessKey
self._aws_session_token = srcAWSSessionToken
def _createAmazonDate(self):
# Returned as a unicode string in Py3.x
amazonDate = []
currentTime = datetime.utcnow()
YMDHMS = currentTime.strftime('%Y%m%dT%H%M%SZ')
YMD = YMDHMS[0:YMDHMS.index('T')]
amazonDate.append(YMD)
amazonDate.append(YMDHMS)
return amazonDate
def _sign(self, key, message):
# Returned as a utf-8 byte string in Py3.x
return hmac.new(key, message.encode('utf-8'), hashlib.sha256).digest()
def _getSignatureKey(self, key, dateStamp, regionName, serviceName):
# Returned as a utf-8 byte string in Py3.x
kDate = self._sign(('AWS4' + key).encode('utf-8'), dateStamp)
kRegion = self._sign(kDate, regionName)
kService = self._sign(kRegion, serviceName)
kSigning = self._sign(kService, 'aws4_request')
return kSigning
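# The chain above is the standard SigV4 signing-key derivation:
#   kSigning = HMAC(HMAC(HMAC(HMAC("AWS4" + secret, date), region), service), "aws4_request")
# For example, _getSignatureKey(secret, "20200128", "us-east-1", "iotdata")
# yields the key used below to sign the canonical request string.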
def _checkIAMCredentials(self):
# Check custom config
ret = self._checkKeyInCustomConfig()
# Check environment variables
if not ret:
ret = self._checkKeyInEnv()
# Check files
if not ret:
ret = self._checkKeyInFiles()
# All credentials returned as unicode strings in Py3.x
return ret
def _checkKeyInEnv(self):
ret = dict()
self._aws_access_key_id = os.environ.get('AWS_ACCESS_KEY_ID')
self._aws_secret_access_key = os.environ.get('AWS_SECRET_ACCESS_KEY')
self._aws_session_token = os.environ.get('AWS_SESSION_TOKEN')
if self._aws_access_key_id is not None and self._aws_secret_access_key is not None:
ret["aws_access_key_id"] = self._aws_access_key_id
ret["aws_secret_access_key"] = self._aws_secret_access_key
# We do not necessarily need session token...
if self._aws_session_token is not None:
ret["aws_session_token"] = self._aws_session_token
self._logger.debug("IAM credentials from env var.")
return ret
def _checkKeyInINIDefault(self, srcConfigParser, sectionName):
ret = dict()
# Check aws_access_key_id and aws_secret_access_key
try:
ret["aws_access_key_id"] = srcConfigParser.get(sectionName, "aws_access_key_id")
ret["aws_secret_access_key"] = srcConfigParser.get(sectionName, "aws_secret_access_key")
except NoOptionError:
self._logger.warn("Cannot find IAM keyID/secretKey in credential file.")
# We do not continue searching if we cannot even get IAM id/secret right
if len(ret) == 2:
# Check aws_session_token, optional
try:
ret["aws_session_token"] = srcConfigParser.get(sectionName, "aws_session_token")
except NoOptionError:
self._logger.debug("No AWS Session Token found.")
return ret
def _checkKeyInFiles(self):
credentialFile = None
credentialConfig = None
ret = dict()
# Should be compatible with aws cli default credential configuration
# *NIX/Windows
try:
# See if we get the file
credentialConfig = ConfigParser()
credentialFilePath = os.path.expanduser(self._credentialConfigFilePath) # expanduser handles both *NIX and Windows home directories
credentialConfig.read(credentialFilePath)
# Now we have the file, start looking for credentials...
# 'default' section
ret = self._checkKeyInINIDefault(credentialConfig, "default")
if not ret:
# 'DEFAULT' section
ret = self._checkKeyInINIDefault(credentialConfig, "DEFAULT")
self._logger.debug("IAM credentials from file.")
except IOError:
self._logger.debug("No IAM credential configuration file in " + credentialFilePath)
except NoSectionError:
self._logger.error("Cannot find IAM 'default' section.")
return ret
def _checkKeyInCustomConfig(self):
ret = dict()
if self._aws_access_key_id != "" and self._aws_secret_access_key != "":
ret["aws_access_key_id"] = self._aws_access_key_id
ret["aws_secret_access_key"] = self._aws_secret_access_key
# We do not necessarily need session token...
if self._aws_session_token != "":
ret["aws_session_token"] = self._aws_session_token
self._logger.debug("IAM credentials from custom config.")
return ret
def createWebsocketEndpoint(self, host, port, region, method, awsServiceName, path):
# Return the endpoint as unicode string in 3.x
# Gather all the facts
amazonDate = self._createAmazonDate()
amazonDateSimple = amazonDate[0] # Unicode in 3.x
amazonDateComplex = amazonDate[1] # Unicode in 3.x
allKeys = self._checkIAMCredentials() # Unicode in 3.x
if not self._hasCredentialsNecessaryForWebsocket(allKeys):
raise wssNoKeyInEnvironmentError()
else:
# Because of self._hasCredentialsNecessaryForWebsocket(...), keyID and secretKey should not be None from here
keyID = allKeys["aws_access_key_id"]
secretKey = allKeys["aws_secret_access_key"]
# amazonDateSimple and amazonDateComplex are guaranteed not to be None
queryParameters = "X-Amz-Algorithm=AWS4-HMAC-SHA256" + \
"&X-Amz-Credential=" + keyID + "%2F" + amazonDateSimple + "%2F" + region + "%2F" + awsServiceName + "%2Faws4_request" + \
"&X-Amz-Date=" + amazonDateComplex + \
"&X-Amz-Expires=86400" + \
"&X-Amz-SignedHeaders=host" # Unicode in 3.x
hashedPayload = hashlib.sha256(str("").encode('utf-8')).hexdigest() # Unicode in 3.x
# Create the string to sign
signedHeaders = "host"
canonicalHeaders = "host:" + host + "\n"
canonicalRequest = method + "\n" + path + "\n" + queryParameters + "\n" + canonicalHeaders + "\n" + signedHeaders + "\n" + hashedPayload # Unicode in 3.x
hashedCanonicalRequest = hashlib.sha256(str(canonicalRequest).encode('utf-8')).hexdigest() # Unicode in 3.x
stringToSign = "AWS4-HMAC-SHA256\n" + amazonDateComplex + "\n" + amazonDateSimple + "/" + region + "/" + awsServiceName + "/aws4_request\n" + hashedCanonicalRequest # Unicode in 3.x
# Sign it
signingKey = self._getSignatureKey(secretKey, amazonDateSimple, region, awsServiceName)
signature = hmac.new(signingKey, (stringToSign).encode("utf-8"), hashlib.sha256).hexdigest()
# generate url
url = "wss://" + host + ":" + str(port) + path + '?' + queryParameters + "&X-Amz-Signature=" + signature
# See if we have STS token, if we do, add it
awsSessionTokenCandidate = allKeys.get("aws_session_token")
if awsSessionTokenCandidate is not None and len(awsSessionTokenCandidate) != 0:
aws_session_token = allKeys["aws_session_token"]
url += "&X-Amz-Security-Token=" + quote(aws_session_token.encode("utf-8")) # Unicode in 3.x
self._logger.debug("createWebsocketEndpoint: Websocket URL: " + url)
return url
def _hasCredentialsNecessaryForWebsocket(self, allKeys):
awsAccessKeyIdCandidate = allKeys.get("aws_access_key_id")
awsSecretAccessKeyCandidate = allKeys.get("aws_secret_access_key")
# None value is NOT considered as valid entries
validEntries = awsAccessKeyIdCandidate is not None and awsSecretAccessKeyCandidate is not None
if validEntries:
# Empty value is NOT considered as valid entries
validEntries &= (len(awsAccessKeyIdCandidate) != 0 and len(awsSecretAccessKeyCandidate) != 0)
return validEntries
# This is an internal class that buffers the incoming bytes into an
# internal buffer until it gets the full desired length of bytes.
# At that time, this bufferedReader will be reset.
# *Error handling:
# For retry errors (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, EAGAIN),
# leave them to the paho _packet_read for further handling (ignored; we try
# again when data is available).
# For other errors, leave them to the paho _packet_read for error reporting.
class _BufferedReader:
_sslSocket = None
_internalBuffer = None
_remainedLength = -1
_bufferingInProgress = False
def __init__(self, sslSocket):
self._sslSocket = sslSocket
self._internalBuffer = bytearray()
self._bufferingInProgress = False
def _reset(self):
self._internalBuffer = bytearray()
self._remainedLength = -1
self._bufferingInProgress = False
def read(self, numberOfBytesToBeBuffered):
if not self._bufferingInProgress: # If last read is completed...
self._remainedLength = numberOfBytesToBeBuffered
self._bufferingInProgress = True # Now we start buffering a new length of bytes
while self._remainedLength > 0: # Read in a loop, always requesting the remaining length
# If the data is temporarily not available, socket.error will be raised and caught by paho
dataChunk = self._sslSocket.read(self._remainedLength)
# There is a chance where the server terminates the connection without closing the socket.
# If that happens, let's raise an exception and enter the reconnect flow.
if not dataChunk:
raise socket.error(errno.ECONNABORTED, 0)
self._internalBuffer.extend(dataChunk) # Buffer the data
self._remainedLength -= len(dataChunk) # Update the remaining length
# The requested length of bytes is now buffered; reset the context and return it.
# Otherwise an error would have been raised above.
ret = self._internalBuffer
self._reset()
return ret # This should always be bytearray
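# For example, if read(5) buffers 3 bytes and the socket then raises
# SSL_ERROR_WANT_READ, those 3 bytes stay in _internalBuffer; the next
# read(5) call resumes and requests only the remaining 2 bytes.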
# This is the internal class that sends the requested data out chunk by chunk according
# to the availability of the socket write operation. If the requested bytes of data
# (after encoding) need to be sent out in separate socket write operations (most
# likely interrupted by socket.error with errno = ssl.SSL_ERROR_WANT_WRITE),
# the write pointer is stored to ensure that the remaining bytes are sent the next
# time this function gets called.
# *Error handling:
# For retry errors (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, EAGAIN),
# leave them to the paho _packet_read for further handling (ignored; we try
# again when data is available).
# For other errors, leave them to the paho _packet_read for error reporting.
class _BufferedWriter:
_sslSocket = None
_internalBuffer = None
_writingInProgress = False
_requestedDataLength = -1
def __init__(self, sslSocket):
self._sslSocket = sslSocket
self._internalBuffer = bytearray()
self._writingInProgress = False
self._requestedDataLength = -1
def _reset(self):
self._internalBuffer = bytearray()
self._writingInProgress = False
self._requestedDataLength = -1
# Input data for this function needs to be an encoded wss frame
# Always request for packet[pos=0:] (raw MQTT data)
def write(self, encodedData, payloadLength):
# encodedData should always be bytearray
# Check if we have a frame that is partially sent
if not self._writingInProgress:
self._internalBuffer = encodedData
self._writingInProgress = True
self._requestedDataLength = payloadLength
# Now, write as much as we can
lengthWritten = self._sslSocket.write(self._internalBuffer)
self._internalBuffer = self._internalBuffer[lengthWritten:]
# This MQTT packet has been sent out in a wss frame, completely
if len(self._internalBuffer) == 0:
ret = self._requestedDataLength
self._reset()
return ret
# This socket write is half-baked...
else:
return 0 # Ensure that the 'pos' inside the MQTT packet never moves since we have not finished the transmission of this encoded frame
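# For example, when writing a 100-byte encoded frame for an MQTT payload of
# length 90: if the socket accepts only 60 bytes, write() returns 0 and keeps
# the remaining 40 bytes buffered; a later call finishes the send and returns
# 90, so paho advances its packet position only once the whole frame is out.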
class SecuredWebSocketCore:
# Websocket Constants
_OP_CONTINUATION = 0x0
_OP_TEXT = 0x1
_OP_BINARY = 0x2
_OP_CONNECTION_CLOSE = 0x8
_OP_PING = 0x9
_OP_PONG = 0xa
# Websocket Connect Status
_WebsocketConnectInit = -1
_WebsocketDisconnected = 1
_logger = logging.getLogger(__name__)
def __init__(self, socket, hostAddress, portNumber, AWSAccessKeyID="", AWSSecretAccessKey="", AWSSessionToken=""):
self._connectStatus = self._WebsocketConnectInit
# Handlers
self._sslSocket = socket
self._sigV4Handler = self._createSigV4Core()
self._sigV4Handler.setIAMCredentials(AWSAccessKeyID, AWSSecretAccessKey, AWSSessionToken)
# Endpoint Info
self._hostAddress = hostAddress
self._portNumber = portNumber
# Section Flags
self._hasOpByte = False
self._hasPayloadLengthFirst = False
self._hasPayloadLengthExtended = False
self._hasMaskKey = False
self._hasPayload = False
# Properties for current websocket frame
self._isFIN = False
self._RSVBits = None
self._opCode = None
self._needMaskKey = False
self._payloadLengthBytesLength = 1
self._payloadLength = 0
self._maskKey = None
self._payloadDataBuffer = bytearray() # Once the whole wss connection is lost, there is no need to keep the buffered payload
try:
self._handShake(hostAddress, portNumber)
except wssNoKeyInEnvironmentError: # Handle SigV4 signing and websocket handshaking errors
raise ValueError("No Access Key/KeyID Error")
except wssHandShakeError:
raise ValueError("Websocket Handshake Error")
except ClientError as e:
raise ValueError(e.message)
# Now we have a socket with secured websocket...
self._bufferedReader = _BufferedReader(self._sslSocket)
self._bufferedWriter = _BufferedWriter(self._sslSocket)
def _createSigV4Core(self):
return SigV4Core()
def _generateMaskKey(self):
# os.urandom returns an ascii str in 2.x and bytes in 3.x; both convert to bytearray
return bytearray(os.urandom(4))
def _reset(self): # Reset the context for wss frame reception
# Control info
self._hasOpByte = False
self._hasPayloadLengthFirst = False
self._hasPayloadLengthExtended = False
self._hasMaskKey = False
self._hasPayload = False
# Frame Info
self._isFIN = False
self._RSVBits = None
self._opCode = None
self._needMaskKey = False
self._payloadLengthBytesLength = 1
self._payloadLength = 0
self._maskKey = None
# Never reset the payloadData since we might have fragmented MQTT data from the previous frame
def _generateWSSKey(self):
return base64.b64encode(os.urandom(128)) # Bytes
def _verifyWSSResponse(self, response, clientKey):
# Check if it is a 101 response
rawResponse = response.strip().lower()
if b"101 switching protocols" not in rawResponse or b"upgrade: websocket" not in rawResponse or b"connection: upgrade" not in rawResponse:
return False
# Parse out the sec-websocket-accept
WSSAcceptKeyIndex = response.strip().index(b"sec-websocket-accept: ") + len(b"sec-websocket-accept: ")
rawSecWebSocketAccept = response.strip()[WSSAcceptKeyIndex:].split(b"\r\n")[0].strip()
# Verify the WSSAcceptKey
return self._verifyWSSAcceptKey(rawSecWebSocketAccept, clientKey)
def _verifyWSSAcceptKey(self, srcAcceptKey, clientKey):
GUID = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
verifyServerAcceptKey = base64.b64encode((hashlib.sha1(clientKey + GUID)).digest()) # Bytes
return srcAcceptKey == verifyServerAcceptKey
def _handShake(self, hostAddress, portNumber):
CRLF = "\r\n"
IOT_ENDPOINT_PATTERN = r"^[0-9a-zA-Z]+(\.ats|-ats)?\.iot\.(.*)\.amazonaws\..*"
matched = re.compile(IOT_ENDPOINT_PATTERN, re.IGNORECASE).match(hostAddress)
if not matched:
raise ClientError("Invalid endpoint pattern for wss: %s" % hostAddress)
region = matched.group(2)
signedURL = self._sigV4Handler.createWebsocketEndpoint(hostAddress, portNumber, region, "GET", "iotdata", "/mqtt")
# Now we got a signedURL
path = signedURL[signedURL.index("/mqtt"):]
# Assemble HTTP request headers
Method = "GET " + path + " HTTP/1.1" + CRLF
Host = "Host: " + hostAddress + CRLF
Connection = "Connection: " + "Upgrade" + CRLF
Upgrade = "Upgrade: " + "websocket" + CRLF
secWebSocketVersion = "Sec-WebSocket-Version: " + "13" + CRLF
rawSecWebSocketKey = self._generateWSSKey() # Bytes
secWebSocketKey = "sec-websocket-key: " + rawSecWebSocketKey.decode('utf-8') + CRLF # Randomly generated per handshake
secWebSocketProtocol = "Sec-WebSocket-Protocol: " + "mqttv3.1" + CRLF
secWebSocketExtensions = "Sec-WebSocket-Extensions: " + "permessage-deflate; client_max_window_bits" + CRLF
# Send the HTTP request
# Ensure that we are sending bytes, not by any chance unicode string
handshakeBytes = Method + Host + Connection + Upgrade + secWebSocketVersion + secWebSocketProtocol + secWebSocketExtensions + secWebSocketKey + CRLF
handshakeBytes = handshakeBytes.encode('utf-8')
self._sslSocket.write(handshakeBytes)
# Read it back (Non-blocking socket)
timeStart = time.time()
wssHandshakeResponse = bytearray()
while len(wssHandshakeResponse) == 0:
try:
wssHandshakeResponse += self._sslSocket.read(1024) # Response is always less than 1024 bytes
except socket.error as err:
if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE:
if time.time() - timeStart > self._getTimeoutSec():
raise err # We make sure that reconnect gets retried in Paho upon a wss reconnect response timeout
else:
raise err
# Verify response
# Now both wssHandshakeResponse and rawSecWebSocketKey are byte strings
if not self._verifyWSSResponse(wssHandshakeResponse, rawSecWebSocketKey):
raise wssHandShakeError()
else:
pass
def _getTimeoutSec(self):
return DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC
# Used to create a single wss frame
# Assume that the maximum length of an MQTT packet never exceeds the maximum length
# for a wss frame. Therefore, the FIN bit for the encoded frame will always be 1.
# Frames are encoded as BINARY frames.
def _encodeFrame(self, rawPayload, opCode, masked=1):
ret = bytearray()
# Op byte
opByte = 0x80 | opCode # Always a FIN, no RSV bits
ret.append(opByte)
# Payload Length bytes
maskBit = masked
payloadLength = len(rawPayload)
if payloadLength <= 125:
ret.append((maskBit << 7) | payloadLength)
elif payloadLength <= 0xffff: # 16-bit unsigned int
ret.append((maskBit << 7) | 126)
ret.extend(struct.pack("!H", payloadLength))
elif payloadLength <= 0x7fffffffffffffff: # 64-bit unsigned int (most significant bit must be 0)
ret.append((maskBit << 7) | 127)
ret.extend(struct.pack("!Q", payloadLength))
else: # Overflow
raise ValueError("Exceeds the maximum number of bytes for a single websocket frame.")
if maskBit == 1:
# Mask key bytes
maskKey = self._generateMaskKey()
ret.extend(maskKey)
# Mask the payload
payloadBytes = bytearray(rawPayload)
if maskBit == 1:
for i in range(0, payloadLength):
payloadBytes[i] ^= maskKey[i % 4]
ret.extend(payloadBytes)
# Return the assembled wss frame
return ret
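# A worked example of the layout above: a masked 5-byte BINARY frame is
#   0x82 (FIN=1, opcode=0x2), 0x85 (mask bit set, length 5),
#   the 4 mask key bytes, then the 5 payload bytes XOR-ed with maskKey[i % 4].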
# Used for the wss client to close a wss connection
# Create and send a masked wss closing frame
def _closeWssConnection(self):
# Frames sent from client to server must be masked
self._sslSocket.write(self._encodeFrame(b"", self._OP_CONNECTION_CLOSE, masked=1))
# Used for the wss client to respond to a wss PING from server
# Create and send a masked PONG frame
def _sendPONG(self):
# Frames sent from client to server must be masked
self._sslSocket.write(self._encodeFrame(b"", self._OP_PONG, masked=1))
# Override sslSocket read. Always read from the wss internal payload buffer, which
# contains the masked MQTT packet. This read will decode ONE wss frame every time
# and load in the payload for MQTT _packet_read. At any time, MQTT _packet_read
# should be able to read a complete MQTT packet from the payload (buffered per wss
# frame payload). If the MQTT packet is broken into separate wss frames, the
# chunks will be buffered from separate frames and MQTT _packet_read will not be
# able to collect a complete MQTT packet to operate on until the necessary payload is
# fully buffered.
# If the requested number of bytes are not available, SSL_ERROR_WANT_READ will be
# raised to trigger another call of _packet_read when the data is available again.
def read(self, numberOfBytes):
# Check if we have enough data for paho
# _payloadDataBuffer is non-empty only when the payload of a new wss frame
# has been unmasked.
if len(self._payloadDataBuffer) >= numberOfBytes:
ret = self._payloadDataBuffer[0:numberOfBytes]
self._payloadDataBuffer = self._payloadDataBuffer[numberOfBytes:]
# struct.unpack(fmt, string) # Py2.x
# struct.unpack(fmt, buffer) # Py3.x
# Here ret is always in bytes (buffer interface)
if sys.version_info[0] < 3: # Py2.x
ret = str(ret)
return ret
# We don't have enough yet. Try to buffer more from the socket (it's a new wss frame).
if not self._hasOpByte: # Check if we need to buffer OpByte
opByte = self._bufferedReader.read(1)
self._isFIN = (opByte[0] & 0x80) == 0x80
self._RSVBits = (opByte[0] & 0x70)
self._opCode = (opByte[0] & 0x0f)
self._hasOpByte = True # Finished buffering opByte
# Check if any of the RSV bits are set; if so, close the connection,
# since the client never negotiates any extensions
if self._RSVBits != 0x0:
self._closeWssConnection()
self._connectStatus = self._WebsocketDisconnected
self._payloadDataBuffer = bytearray()
raise socket.error(ssl.SSL_ERROR_WANT_READ, "RSV bits set with NO negotiated extensions.")
if not self._hasPayloadLengthFirst: # Check if we need to buffer First Payload Length byte
payloadLengthFirst = self._bufferedReader.read(1)
self._hasPayloadLengthFirst = True # Finished buffering first byte of payload length
self._needMaskKey = (payloadLengthFirst[0] & 0x80) == 0x80
payloadLengthFirstByteArray = bytearray()
payloadLengthFirstByteArray.extend(payloadLengthFirst)
self._payloadLength = (payloadLengthFirstByteArray[0] & 0x7f)
if self._payloadLength == 126:
self._payloadLengthBytesLength = 2
self._hasPayloadLengthExtended = False # Force to buffer the extended
elif self._payloadLength == 127:
self._payloadLengthBytesLength = 8
self._hasPayloadLengthExtended = False # Force to buffer the extended
else: # _payloadLength <= 125:
self._hasPayloadLengthExtended = True # No need to buffer extended payload length
if not self._hasPayloadLengthExtended: # Check if we need to buffer Extended Payload Length bytes
payloadLengthExtended = self._bufferedReader.read(self._payloadLengthBytesLength)
self._hasPayloadLengthExtended = True
if sys.version_info[0] < 3:
payloadLengthExtended = str(payloadLengthExtended)
if self._payloadLengthBytesLength == 2:
self._payloadLength = struct.unpack("!H", payloadLengthExtended)[0]
else: # _payloadLengthBytesLength == 8
self._payloadLength = struct.unpack("!Q", payloadLengthExtended)[0]
if self._needMaskKey: # Response from server is masked, close the connection
self._closeWssConnection()
self._connectStatus = self._WebsocketDisconnected
self._payloadDataBuffer = bytearray()
raise socket.error(ssl.SSL_ERROR_WANT_READ, "Server response masked; closing connection to try again.")
if not self._hasPayload: # Check if we need to buffer the payload
payloadForThisFrame = self._bufferedReader.read(self._payloadLength)
self._hasPayload = True
# The client side should never receive a masked packet from the server side.
# Unmask it as needed:
#if self._needMaskKey:
# for i in range(0, self._payloadLength):
# payloadForThisFrame[i] ^= self._maskKey[i % 4]
# Append it to the internal payload buffer
self._payloadDataBuffer.extend(payloadForThisFrame)
# Now we have the complete wss frame, reset the context
# Check to see if it is a wss closing frame
if self._opCode == self._OP_CONNECTION_CLOSE:
self._connectStatus = self._WebsocketDisconnected
self._payloadDataBuffer = bytearray() # Ensure that once the wss closing frame comes, we have nothing to read and start all over again
# Check to see if it is a wss PING frame
if self._opCode == self._OP_PING:
self._sendPONG() # Nothing more to do here, if the transmission of the last wssMQTT packet is not finished, it will continue
self._reset()
# Check again if we have enough data for paho
if len(self._payloadDataBuffer) >= numberOfBytes:
ret = self._payloadDataBuffer[0:numberOfBytes]
self._payloadDataBuffer = self._payloadDataBuffer[numberOfBytes:]
# struct.unpack(fmt, string) # Py2.x
# struct.unpack(fmt, buffer) # Py3.x
# Here ret is always in bytes (buffer interface)
if sys.version_info[0] < 3: # Py2.x
ret = str(ret)
return ret
else: # Fragmented MQTT packets in separate wss frames
raise socket.error(ssl.SSL_ERROR_WANT_READ, "Not a complete MQTT packet payload within this wss frame.")
def write(self, bytesToBeSent):
# When there is a disconnection, select will report a TypeError which triggers the reconnect.
# In reconnect, Paho will set the socket object (mocked by wss) to None, blocking other ops
# before a connection is re-established.
# This 'low-level' socket write op should always be able to write to plain socket.
# Error reporting is performed by Python socket itself.
# Wss closing frame handling is performed in the wss read.
return self._bufferedWriter.write(self._encodeFrame(bytesToBeSent, self._OP_BINARY, 1), len(bytesToBeSent))
def close(self):
if self._sslSocket is not None:
self._sslSocket.close()
self._sslSocket = None
def getpeercert(self):
return self._sslSocket.getpeercert()
def getSSLSocket(self):
if self._connectStatus != self._WebsocketDisconnected:
return self._sslSocket
else:
return None # Leave the sslSocket to Paho to close it. (_ssl.close() -> wssCore.close())

View File

@@ -0,0 +1,244 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import ssl
import logging
from threading import Lock
from numbers import Number
import AWSIoTPythonSDK.core.protocol.paho.client as mqtt
from AWSIoTPythonSDK.core.protocol.paho.client import MQTT_ERR_SUCCESS
from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids
class ClientStatus(object):
IDLE = 0
CONNECT = 1
RESUBSCRIBE = 2
DRAINING = 3
STABLE = 4
USER_DISCONNECT = 5
ABNORMAL_DISCONNECT = 6
class ClientStatusContainer(object):
def __init__(self):
self._status = ClientStatus.IDLE
def get_status(self):
return self._status
def set_status(self, status):
if ClientStatus.USER_DISCONNECT == self._status: # After a user disconnect request, only a new connect can change the status
if ClientStatus.CONNECT == status:
self._status = status
else:
self._status = status
class InternalAsyncMqttClient(object):
_logger = logging.getLogger(__name__)
def __init__(self, client_id, clean_session, protocol, use_wss):
self._paho_client = self._create_paho_client(client_id, clean_session, None, protocol, use_wss)
self._use_wss = use_wss
self._event_callback_map_lock = Lock()
self._event_callback_map = dict()
def _create_paho_client(self, client_id, clean_session, user_data, protocol, use_wss):
self._logger.debug("Initializing MQTT layer...")
return mqtt.Client(client_id, clean_session, user_data, protocol, use_wss)
# TODO: Merge credentials providers configuration into one
def set_cert_credentials_provider(self, cert_credentials_provider):
# Historical issue from the Yun SDK, where the AR9331 embedded Linux only has
# Python 2.7.3 pre-installed. In that version, TLSv1_2 is not even an option.
# SSLv23 is a work-around which selects the highest TLS version between the client
# and service. If the user installs OpenSSL 1.0.1+, this option will work fine for
# Mutual Auth.
# Note that we cannot force TLSv1.2 for Mutual Auth in Python 2.7.3, and TLS support
# in Python only starts from Python 2.7.
# See also: https://docs.python.org/2/library/ssl.html#ssl.PROTOCOL_SSLv23
if self._use_wss:
ca_path = cert_credentials_provider.get_ca_path()
self._paho_client.tls_set(ca_certs=ca_path, cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_SSLv23)
else:
ca_path = cert_credentials_provider.get_ca_path()
cert_path = cert_credentials_provider.get_cert_path()
key_path = cert_credentials_provider.get_key_path()
self._paho_client.tls_set(ca_certs=ca_path, certfile=cert_path, keyfile=key_path,
cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_SSLv23)
def set_iam_credentials_provider(self, iam_credentials_provider):
self._paho_client.configIAMCredentials(iam_credentials_provider.get_access_key_id(),
iam_credentials_provider.get_secret_access_key(),
iam_credentials_provider.get_session_token())
def set_endpoint_provider(self, endpoint_provider):
self._endpoint_provider = endpoint_provider
def configure_last_will(self, topic, payload, qos, retain=False):
self._paho_client.will_set(topic, payload, qos, retain)
def configure_alpn_protocols(self, alpn_protocols):
self._paho_client.config_alpn_protocols(alpn_protocols)
def clear_last_will(self):
self._paho_client.will_clear()
def set_username_password(self, username, password=None):
self._paho_client.username_pw_set(username, password)
def set_socket_factory(self, socket_factory):
self._paho_client.socket_factory_set(socket_factory)
def configure_reconnect_back_off(self, base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec):
self._paho_client.setBackoffTiming(base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec)
def connect(self, keep_alive_sec, ack_callback=None):
host = self._endpoint_provider.get_host()
port = self._endpoint_provider.get_port()
with self._event_callback_map_lock:
self._logger.debug("Filling in fixed event callbacks: CONNACK, DISCONNECT, MESSAGE")
self._event_callback_map[FixedEventMids.CONNACK_MID] = self._create_combined_on_connect_callback(ack_callback)
self._event_callback_map[FixedEventMids.DISCONNECT_MID] = self._create_combined_on_disconnect_callback(None)
self._event_callback_map[FixedEventMids.MESSAGE_MID] = self._create_converted_on_message_callback()
rc = self._paho_client.connect(host, port, keep_alive_sec)
if MQTT_ERR_SUCCESS == rc:
self.start_background_network_io()
return rc
def start_background_network_io(self):
self._logger.debug("Starting network I/O thread...")
self._paho_client.loop_start()
def stop_background_network_io(self):
self._logger.debug("Stopping network I/O thread...")
self._paho_client.loop_stop()
def disconnect(self, ack_callback=None):
with self._event_callback_map_lock:
rc = self._paho_client.disconnect()
if MQTT_ERR_SUCCESS == rc:
self._logger.debug("Filling in custom disconnect event callback...")
combined_on_disconnect_callback = self._create_combined_on_disconnect_callback(ack_callback)
self._event_callback_map[FixedEventMids.DISCONNECT_MID] = combined_on_disconnect_callback
return rc
def _create_combined_on_connect_callback(self, ack_callback):
def combined_on_connect_callback(mid, data):
self.on_online()
if ack_callback:
ack_callback(mid, data)
return combined_on_connect_callback
def _create_combined_on_disconnect_callback(self, ack_callback):
def combined_on_disconnect_callback(mid, data):
self.on_offline()
if ack_callback:
ack_callback(mid, data)
return combined_on_disconnect_callback
def _create_converted_on_message_callback(self):
def converted_on_message_callback(mid, data):
self.on_message(data)
return converted_on_message_callback
# For client online notification
def on_online(self):
pass
# For client offline notification
def on_offline(self):
pass
# For client message reception notification
def on_message(self, message):
pass
def publish(self, topic, payload, qos, retain=False, ack_callback=None):
with self._event_callback_map_lock:
rc, mid = self._paho_client.publish(topic, payload, qos, retain)
if MQTT_ERR_SUCCESS == rc and qos > 0 and ack_callback:
self._logger.debug("Filling in custom puback (QoS>0) event callback...")
self._event_callback_map[mid] = ack_callback
return rc, mid
def subscribe(self, topic, qos, ack_callback=None):
with self._event_callback_map_lock:
rc, mid = self._paho_client.subscribe(topic, qos)
if MQTT_ERR_SUCCESS == rc and ack_callback:
self._logger.debug("Filling in custom suback event callback...")
self._event_callback_map[mid] = ack_callback
return rc, mid
def unsubscribe(self, topic, ack_callback=None):
with self._event_callback_map_lock:
rc, mid = self._paho_client.unsubscribe(topic)
if MQTT_ERR_SUCCESS == rc and ack_callback:
self._logger.debug("Filling in custom unsuback event callback...")
self._event_callback_map[mid] = ack_callback
return rc, mid
def register_internal_event_callbacks(self, on_connect, on_disconnect, on_publish, on_subscribe, on_unsubscribe, on_message):
self._logger.debug("Registering internal event callbacks to MQTT layer...")
self._paho_client.on_connect = on_connect
self._paho_client.on_disconnect = on_disconnect
self._paho_client.on_publish = on_publish
self._paho_client.on_subscribe = on_subscribe
self._paho_client.on_unsubscribe = on_unsubscribe
self._paho_client.on_message = on_message
def unregister_internal_event_callbacks(self):
self._logger.debug("Unregistering internal event callbacks from MQTT layer...")
self._paho_client.on_connect = None
self._paho_client.on_disconnect = None
self._paho_client.on_publish = None
self._paho_client.on_subscribe = None
self._paho_client.on_unsubscribe = None
self._paho_client.on_message = None
def invoke_event_callback(self, mid, data=None):
with self._event_callback_map_lock:
event_callback = self._event_callback_map.get(mid)
# For invoking the event callback, we do not need to acquire the lock
if event_callback:
self._logger.debug("Invoking custom event callback...")
if data is not None:
event_callback(mid=mid, data=data)
else:
event_callback(mid=mid)
if isinstance(mid, Number): # Do NOT remove callbacks for CONNACK/DISCONNECT/MESSAGE
self._logger.debug("This custom event callback is for pub/sub/unsub, removing it after invocation...")
with self._event_callback_map_lock:
del self._event_callback_map[mid]
def remove_event_callback(self, mid):
with self._event_callback_map_lock:
if mid in self._event_callback_map:
self._logger.debug("Removing custom event callback...")
del self._event_callback_map[mid]
def clean_up_event_callbacks(self):
with self._event_callback_map_lock:
self._event_callback_map.clear()
def get_event_callback_map(self):
return self._event_callback_map

View File

@@ -0,0 +1,20 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC = 30
DEFAULT_OPERATION_TIMEOUT_SEC = 5
DEFAULT_DRAINING_INTERNAL_SEC = 0.5
METRICS_PREFIX = "?SDK=Python&Version="
ALPN_PROTCOLS = "x-amzn-mqtt-ca"

View File

@@ -0,0 +1,29 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class EventTypes(object):
CONNACK = 0
DISCONNECT = 1
PUBACK = 2
SUBACK = 3
UNSUBACK = 4
MESSAGE = 5
class FixedEventMids(object):
CONNACK_MID = "CONNECTED"
DISCONNECT_MID = "DISCONNECTED"
MESSAGE_MID = "MESSAGE"
QUEUED_MID = "QUEUED"

View File

@@ -0,0 +1,87 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import logging
from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes
class AppendResults(object):
APPEND_FAILURE_QUEUE_FULL = -1
APPEND_FAILURE_QUEUE_DISABLED = -2
APPEND_SUCCESS = 0
class OfflineRequestQueue(list):
_logger = logging.getLogger(__name__)
def __init__(self, max_size, drop_behavior=DropBehaviorTypes.DROP_NEWEST):
if not isinstance(max_size, int) or not isinstance(drop_behavior, int):
self._logger.error("init: MaximumSize/DropBehavior must be integer.")
raise TypeError("MaximumSize/DropBehavior must be integer.")
if drop_behavior != DropBehaviorTypes.DROP_OLDEST and drop_behavior != DropBehaviorTypes.DROP_NEWEST:
self._logger.error("init: Drop behavior not supported.")
raise ValueError("Drop behavior not supported.")
list.__init__(self)
self._drop_behavior = drop_behavior
# When self._max_size > 0, the queue is limited
# When self._max_size == 0, the queue is disabled
# When self._max_size < 0, the queue is infinite
self._max_size = max_size
def _is_enabled(self):
return self._max_size != 0
def _need_drop_messages(self):
# Need to drop messages when:
# 1. Queue is limited and full
# 2. Queue is disabled
is_queue_full = len(self) >= self._max_size
is_queue_limited = self._max_size > 0
is_queue_disabled = not self._is_enabled()
return (is_queue_full and is_queue_limited) or is_queue_disabled
def set_behavior_drop_newest(self):
self._drop_behavior = DropBehaviorTypes.DROP_NEWEST
def set_behavior_drop_oldest(self):
self._drop_behavior = DropBehaviorTypes.DROP_OLDEST
# Override
# Append to a queue with a limited size.
# Return APPEND_SUCCESS if the append is successful
# Return APPEND_FAILURE_QUEUE_FULL if the append failed because the queue is full
# Return APPEND_FAILURE_QUEUE_DISABLED if the append failed because the queue is disabled
def append(self, data):
ret = AppendResults.APPEND_SUCCESS
if self._is_enabled():
if self._need_drop_messages():
# We should drop the newest
if DropBehaviorTypes.DROP_NEWEST == self._drop_behavior:
self._logger.warn("append: Full queue. Drop the newest: " + str(data))
ret = AppendResults.APPEND_FAILURE_QUEUE_FULL
# We should drop the oldest
else:
current_oldest = super(OfflineRequestQueue, self).pop(0)
self._logger.warn("append: Full queue. Drop the oldest: " + str(current_oldest))
super(OfflineRequestQueue, self).append(data)
ret = AppendResults.APPEND_FAILURE_QUEUE_FULL
else:
self._logger.debug("append: Add new element: " + str(data))
super(OfflineRequestQueue, self).append(data)
else:
self._logger.debug("append: Queue is disabled. Drop the message: " + str(data))
ret = AppendResults.APPEND_FAILURE_QUEUE_DISABLED
return ret
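# A minimal sketch of the two drop behaviors (capacity of 2, names are placeholders):
#   q = OfflineRequestQueue(2, DropBehaviorTypes.DROP_NEWEST)
#   q.append("a"); q.append("b")  # -> APPEND_SUCCESS twice; queue is ["a", "b"]
#   q.append("c")                 # -> APPEND_FAILURE_QUEUE_FULL; "c" is dropped
# With DROP_OLDEST the third append evicts "a" instead, leaving ["b", "c"],
# and still returns APPEND_FAILURE_QUEUE_FULL.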

View File

@@ -0,0 +1,27 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class RequestTypes(object):
CONNECT = 0
DISCONNECT = 1
PUBLISH = 2
SUBSCRIBE = 3
UNSUBSCRIBE = 4
class QueueableRequest(object):
def __init__(self, type, data):
self.type = type
self.data = data # Can be a tuple

View File

@@ -0,0 +1,296 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import time
import logging
from threading import Thread
from threading import Event
from AWSIoTPythonSDK.core.protocol.internal.events import EventTypes
from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids
from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatus
from AWSIoTPythonSDK.core.protocol.internal.queues import OfflineRequestQueue
from AWSIoTPythonSDK.core.protocol.internal.requests import RequestTypes
from AWSIoTPythonSDK.core.protocol.paho.client import topic_matches_sub
from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_DRAINING_INTERNAL_SEC
class EventProducer(object):
_logger = logging.getLogger(__name__)
def __init__(self, cv, event_queue):
self._cv = cv
self._event_queue = event_queue
def on_connect(self, client, user_data, flags, rc):
self._add_to_queue(FixedEventMids.CONNACK_MID, EventTypes.CONNACK, rc)
self._logger.debug("Produced [connack] event")
def on_disconnect(self, client, user_data, rc):
self._add_to_queue(FixedEventMids.DISCONNECT_MID, EventTypes.DISCONNECT, rc)
self._logger.debug("Produced [disconnect] event")
def on_publish(self, client, user_data, mid):
self._add_to_queue(mid, EventTypes.PUBACK, None)
self._logger.debug("Produced [puback] event")
def on_subscribe(self, client, user_data, mid, granted_qos):
self._add_to_queue(mid, EventTypes.SUBACK, granted_qos)
self._logger.debug("Produced [suback] event")
def on_unsubscribe(self, client, user_data, mid):
self._add_to_queue(mid, EventTypes.UNSUBACK, None)
self._logger.debug("Produced [unsuback] event")
def on_message(self, client, user_data, message):
self._add_to_queue(FixedEventMids.MESSAGE_MID, EventTypes.MESSAGE, message)
self._logger.debug("Produced [message] event")
def _add_to_queue(self, mid, event_type, data):
with self._cv:
self._event_queue.put((mid, event_type, data))
self._cv.notify()
class EventConsumer(object):
MAX_DISPATCH_INTERNAL_SEC = 0.01
_logger = logging.getLogger(__name__)
def __init__(self, cv, event_queue, internal_async_client,
subscription_manager, offline_requests_manager, client_status):
self._cv = cv
self._event_queue = event_queue
self._internal_async_client = internal_async_client
self._subscription_manager = subscription_manager
self._offline_requests_manager = offline_requests_manager
self._client_status = client_status
self._is_running = False
self._draining_interval_sec = DEFAULT_DRAINING_INTERNAL_SEC
self._dispatch_methods = {
EventTypes.CONNACK : self._dispatch_connack,
EventTypes.DISCONNECT : self._dispatch_disconnect,
EventTypes.PUBACK : self._dispatch_puback,
EventTypes.SUBACK : self._dispatch_suback,
EventTypes.UNSUBACK : self._dispatch_unsuback,
EventTypes.MESSAGE : self._dispatch_message
}
self._offline_request_handlers = {
RequestTypes.PUBLISH : self._handle_offline_publish,
RequestTypes.SUBSCRIBE : self._handle_offline_subscribe,
RequestTypes.UNSUBSCRIBE : self._handle_offline_unsubscribe
}
self._stopper = Event()
def update_offline_requests_manager(self, offline_requests_manager):
self._offline_requests_manager = offline_requests_manager
def update_draining_interval_sec(self, draining_interval_sec):
self._draining_interval_sec = draining_interval_sec
def get_draining_interval_sec(self):
return self._draining_interval_sec
def is_running(self):
return self._is_running
def start(self):
self._stopper.clear()
self._is_running = True
dispatch_events = Thread(target=self._dispatch)
dispatch_events.daemon = True
dispatch_events.start()
self._logger.debug("Event consuming thread started")
def stop(self):
if self._is_running:
self._is_running = False
self._clean_up()
self._logger.debug("Event consuming thread stopped")
def _clean_up(self):
self._logger.debug("Cleaning up before stopping event consuming")
with self._event_queue.mutex:
self._event_queue.queue.clear()
self._logger.debug("Event queue cleared")
self._internal_async_client.stop_background_network_io()
self._logger.debug("Network thread stopped")
self._internal_async_client.clean_up_event_callbacks()
self._logger.debug("Event callbacks cleared")
def wait_until_it_stops(self, timeout_sec):
self._logger.debug("Waiting for event consumer to completely stop")
return self._stopper.wait(timeout=timeout_sec)
def is_fully_stopped(self):
return self._stopper.is_set()
def _dispatch(self):
while self._is_running:
with self._cv:
if self._event_queue.empty():
self._cv.wait(self.MAX_DISPATCH_INTERNAL_SEC)
else:
while not self._event_queue.empty():
self._dispatch_one()
self._stopper.set()
self._logger.debug("Exiting dispatching loop...")
def _dispatch_one(self):
mid, event_type, data = self._event_queue.get()
if mid:
self._dispatch_methods[event_type](mid, data)
self._internal_async_client.invoke_event_callback(mid, data=data)
# Make sure the disconnect event gets dispatched before we stop the consumer
if self._need_to_stop_dispatching(mid):
self.stop()
def _need_to_stop_dispatching(self, mid):
status = self._client_status.get_status()
return (ClientStatus.USER_DISCONNECT == status or ClientStatus.CONNECT == status) \
and mid == FixedEventMids.DISCONNECT_MID
def _dispatch_connack(self, mid, rc):
status = self._client_status.get_status()
self._logger.debug("Dispatching [connack] event")
if self._need_recover():
if ClientStatus.STABLE != status: # To avoid multiple connack dispatching
self._logger.debug("Has recovery job")
clean_up_debt = Thread(target=self._clean_up_debt)
clean_up_debt.start()
else:
self._logger.debug("No need for recovery")
self._client_status.set_status(ClientStatus.STABLE)
def _need_recover(self):
return self._subscription_manager.list_records() or self._offline_requests_manager.has_more()
def _clean_up_debt(self):
self._handle_resubscribe()
self._handle_draining()
self._client_status.set_status(ClientStatus.STABLE)
def _handle_resubscribe(self):
subscriptions = self._subscription_manager.list_records()
if subscriptions and not self._has_user_disconnect_request():
self._logger.debug("Start resubscribing")
self._client_status.set_status(ClientStatus.RESUBSCRIBE)
for topic, (qos, message_callback, ack_callback) in subscriptions:
if self._has_user_disconnect_request():
self._logger.debug("User disconnect detected")
break
self._internal_async_client.subscribe(topic, qos, ack_callback)
def _handle_draining(self):
if self._offline_requests_manager.has_more() and not self._has_user_disconnect_request():
self._logger.debug("Start draining")
self._client_status.set_status(ClientStatus.DRAINING)
while self._offline_requests_manager.has_more():
if self._has_user_disconnect_request():
self._logger.debug("User disconnect detected")
break
offline_request = self._offline_requests_manager.get_next()
if offline_request:
self._offline_request_handlers[offline_request.type](offline_request)
time.sleep(self._draining_interval_sec)
def _has_user_disconnect_request(self):
return ClientStatus.USER_DISCONNECT == self._client_status.get_status()
def _dispatch_disconnect(self, mid, rc):
self._logger.debug("Dispatching [disconnect] event")
status = self._client_status.get_status()
if ClientStatus.USER_DISCONNECT == status or ClientStatus.CONNECT == status:
pass
else:
self._client_status.set_status(ClientStatus.ABNORMAL_DISCONNECT)
# For puback, suback and unsuback, ack callback invocation is handled in dispatch_one
# Do nothing in the event dispatching itself
def _dispatch_puback(self, mid, rc):
self._logger.debug("Dispatching [puback] event")
def _dispatch_suback(self, mid, rc):
self._logger.debug("Dispatching [suback] event")
def _dispatch_unsuback(self, mid, rc):
self._logger.debug("Dispatching [unsuback] event")
def _dispatch_message(self, mid, message):
self._logger.debug("Dispatching [message] event")
subscriptions = self._subscription_manager.list_records()
if subscriptions:
for topic, (qos, message_callback, _) in subscriptions:
if topic_matches_sub(topic, message.topic) and message_callback:
message_callback(None, None, message) # message_callback(client, userdata, message)
def _handle_offline_publish(self, request):
topic, payload, qos, retain = request.data
self._internal_async_client.publish(topic, payload, qos, retain)
self._logger.debug("Processed offline publish request")
def _handle_offline_subscribe(self, request):
topic, qos, message_callback, ack_callback = request.data
self._subscription_manager.add_record(topic, qos, message_callback, ack_callback)
self._internal_async_client.subscribe(topic, qos, ack_callback)
self._logger.debug("Processed offline subscribe request")
def _handle_offline_unsubscribe(self, request):
topic, ack_callback = request.data
self._subscription_manager.remove_record(topic)
self._internal_async_client.unsubscribe(topic, ack_callback)
self._logger.debug("Processed offline unsubscribe request")
class SubscriptionManager(object):
_logger = logging.getLogger(__name__)
def __init__(self):
self._subscription_map = dict()
def add_record(self, topic, qos, message_callback, ack_callback):
self._logger.debug("Adding a new subscription record: %s qos: %d", topic, qos)
self._subscription_map[topic] = qos, message_callback, ack_callback # message_callback and/or ack_callback could be None
def remove_record(self, topic):
self._logger.debug("Removing subscription record: %s", topic)
if topic in self._subscription_map: # Ignore topics that were never subscribed to
del self._subscription_map[topic]
else:
self._logger.warning("Attempted to remove a non-existent subscription record: %s", topic)
def list_records(self):
return list(self._subscription_map.items())
class OfflineRequestsManager(object):
_logger = logging.getLogger(__name__)
def __init__(self, max_size, drop_behavior):
self._queue = OfflineRequestQueue(max_size, drop_behavior)
def has_more(self):
return len(self._queue) > 0
def add_one(self, request):
return self._queue.append(request)
def get_next(self):
if self.has_more():
return self._queue.pop(0)
else:
return None
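These workers are wired together by MqttCore (next file). Below is a minimal sketch of that wiring; the network-facing pieces are left as comments because they require a configured InternalAsyncMqttClient and ClientStatusContainer, and all variable names are illustrative.
from threading import Condition
from queue import Queue  # Python 2 would use: from Queue import Queue
from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes

event_queue = Queue()
event_cv = Condition()
producer = EventProducer(event_cv, event_queue)  # Registered as the Paho callbacks; pushes events in
subscription_manager = SubscriptionManager()
offline_requests = OfflineRequestsManager(-1, DropBehaviorTypes.DROP_NEWEST)  # Infinite queue
# consumer = EventConsumer(event_cv, event_queue, internal_async_client,
#                          subscription_manager, offline_requests, client_status)
# consumer.start()  # Drains the event queue on a daemon thread until stop() is called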


@@ -0,0 +1,373 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import AWSIoTPythonSDK
from AWSIoTPythonSDK.core.protocol.internal.clients import InternalAsyncMqttClient
from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatusContainer
from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatus
from AWSIoTPythonSDK.core.protocol.internal.workers import EventProducer
from AWSIoTPythonSDK.core.protocol.internal.workers import EventConsumer
from AWSIoTPythonSDK.core.protocol.internal.workers import SubscriptionManager
from AWSIoTPythonSDK.core.protocol.internal.workers import OfflineRequestsManager
from AWSIoTPythonSDK.core.protocol.internal.requests import RequestTypes
from AWSIoTPythonSDK.core.protocol.internal.requests import QueueableRequest
from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC
from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_OPERATION_TIMEOUT_SEC
from AWSIoTPythonSDK.core.protocol.internal.defaults import METRICS_PREFIX
from AWSIoTPythonSDK.core.protocol.internal.defaults import ALPN_PROTCOLS
from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids
from AWSIoTPythonSDK.core.protocol.paho.client import MQTT_ERR_SUCCESS
from AWSIoTPythonSDK.exception.AWSIoTExceptions import connectError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import connectTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import disconnectError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import disconnectTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueFullException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueDisabledException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeQueueFullException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeQueueDisabledException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeQueueFullException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeQueueDisabledException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeTimeoutException
from AWSIoTPythonSDK.core.protocol.internal.queues import AppendResults
from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes
from AWSIoTPythonSDK.core.protocol.paho.client import MQTTv31
from threading import Condition
from threading import Event
import logging
import sys
if sys.version_info[0] < 3:
from Queue import Queue
else:
from queue import Queue
class MqttCore(object):
_logger = logging.getLogger(__name__)
def __init__(self, client_id, clean_session, protocol, use_wss):
self._use_wss = use_wss
self._username = ""
self._password = None
self._enable_metrics_collection = True
self._event_queue = Queue()
self._event_cv = Condition()
self._event_producer = EventProducer(self._event_cv, self._event_queue)
self._client_status = ClientStatusContainer()
self._internal_async_client = InternalAsyncMqttClient(client_id, clean_session, protocol, use_wss)
self._subscription_manager = SubscriptionManager()
self._offline_requests_manager = OfflineRequestsManager(-1, DropBehaviorTypes.DROP_NEWEST) # Infinite queue
self._event_consumer = EventConsumer(self._event_cv,
self._event_queue,
self._internal_async_client,
self._subscription_manager,
self._offline_requests_manager,
self._client_status)
self._connect_disconnect_timeout_sec = DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC
self._operation_timeout_sec = DEFAULT_OPERATION_TIMEOUT_SEC
self._init_offline_request_exceptions()
self._init_workers()
self._logger.info("MqttCore initialized")
self._logger.info("Client id: %s" % client_id)
self._logger.info("Protocol version: %s" % ("MQTTv3.1" if protocol == MQTTv31 else "MQTTv3.1.1"))
self._logger.info("Authentication type: %s" % ("SigV4 WebSocket" if use_wss else "TLSv1.2 certificate based Mutual Auth."))
def _init_offline_request_exceptions(self):
self._offline_request_queue_disabled_exceptions = {
RequestTypes.PUBLISH : publishQueueDisabledException(),
RequestTypes.SUBSCRIBE : subscribeQueueDisabledException(),
RequestTypes.UNSUBSCRIBE : unsubscribeQueueDisabledException()
}
self._offline_request_queue_full_exceptions = {
RequestTypes.PUBLISH : publishQueueFullException(),
RequestTypes.SUBSCRIBE : subscribeQueueFullException(),
RequestTypes.UNSUBSCRIBE : unsubscribeQueueFullException()
}
def _init_workers(self):
self._internal_async_client.register_internal_event_callbacks(self._event_producer.on_connect,
self._event_producer.on_disconnect,
self._event_producer.on_publish,
self._event_producer.on_subscribe,
self._event_producer.on_unsubscribe,
self._event_producer.on_message)
def _start_workers(self):
self._event_consumer.start()
def use_wss(self):
return self._use_wss
# Used for general message event reception
def on_message(self, message):
pass
# Used for general online event notification
def on_online(self):
pass
# Used for general offline event notification
def on_offline(self):
pass
def configure_cert_credentials(self, cert_credentials_provider):
self._logger.info("Configuring certificates...")
self._internal_async_client.set_cert_credentials_provider(cert_credentials_provider)
def configure_iam_credentials(self, iam_credentials_provider):
self._logger.info("Configuring custom IAM credentials...")
self._internal_async_client.set_iam_credentials_provider(iam_credentials_provider)
def configure_endpoint(self, endpoint_provider):
self._logger.info("Configuring endpoint...")
self._internal_async_client.set_endpoint_provider(endpoint_provider)
def configure_connect_disconnect_timeout_sec(self, connect_disconnect_timeout_sec):
self._logger.info("Configuring connect/disconnect time out: %f sec" % connect_disconnect_timeout_sec)
self._connect_disconnect_timeout_sec = connect_disconnect_timeout_sec
def configure_operation_timeout_sec(self, operation_timeout_sec):
self._logger.info("Configuring MQTT operation time out: %f sec" % operation_timeout_sec)
self._operation_timeout_sec = operation_timeout_sec
def configure_reconnect_back_off(self, base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec):
self._logger.info("Configuring reconnect back off timing...")
self._logger.info("Base quiet time: %f sec" % base_reconnect_quiet_sec)
self._logger.info("Max quiet time: %f sec" % max_reconnect_quiet_sec)
self._logger.info("Stable connection time: %f sec" % stable_connection_sec)
self._internal_async_client.configure_reconnect_back_off(base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec)
def configure_alpn_protocols(self):
self._logger.info("Configuring alpn protocols...")
self._internal_async_client.configure_alpn_protocols([ALPN_PROTCOLS])
def configure_last_will(self, topic, payload, qos, retain=False):
self._logger.info("Configuring last will...")
self._internal_async_client.configure_last_will(topic, payload, qos, retain)
def clear_last_will(self):
self._logger.info("Clearing last will...")
self._internal_async_client.clear_last_will()
def configure_username_password(self, username, password=None):
self._logger.info("Configuring username and password...")
self._username = username
self._password = password
def configure_socket_factory(self, socket_factory):
self._logger.info("Configuring socket factory...")
self._internal_async_client.set_socket_factory(socket_factory)
def enable_metrics_collection(self):
self._enable_metrics_collection = True
def disable_metrics_collection(self):
self._enable_metrics_collection = False
def configure_offline_requests_queue(self, max_size, drop_behavior):
self._logger.info("Configuring offline requests queueing: max queue size: %d", max_size)
self._offline_requests_manager = OfflineRequestsManager(max_size, drop_behavior)
self._event_consumer.update_offline_requests_manager(self._offline_requests_manager)
def configure_draining_interval_sec(self, draining_interval_sec):
self._logger.info("Configuring offline requests queue draining interval: %f sec", draining_interval_sec)
self._event_consumer.update_draining_interval_sec(draining_interval_sec)
def connect(self, keep_alive_sec):
self._logger.info("Performing sync connect...")
event = Event()
self.connect_async(keep_alive_sec, self._create_blocking_ack_callback(event))
if not event.wait(self._connect_disconnect_timeout_sec):
self._logger.error("Connect timed out")
raise connectTimeoutException()
return True
def connect_async(self, keep_alive_sec, ack_callback=None):
self._logger.info("Performing async connect...")
self._logger.info("Keep-alive: %f sec" % keep_alive_sec)
self._start_workers()
self._load_callbacks()
self._load_username_password()
try:
self._client_status.set_status(ClientStatus.CONNECT)
rc = self._internal_async_client.connect(keep_alive_sec, ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Connect error: %d", rc)
raise connectError(rc)
except Exception as e:
# If any error occurs during connect, clean up the threads that have been created
self._event_consumer.stop()
if not self._event_consumer.wait_until_it_stops(self._connect_disconnect_timeout_sec):
self._logger.error("Time out in waiting for event consumer to stop")
else:
self._logger.debug("Event consumer stopped")
self._client_status.set_status(ClientStatus.IDLE)
raise e
return FixedEventMids.CONNACK_MID
def _load_callbacks(self):
self._logger.debug("Passing in general notification callbacks to internal client...")
self._internal_async_client.on_online = self.on_online
self._internal_async_client.on_offline = self.on_offline
self._internal_async_client.on_message = self.on_message
def _load_username_password(self):
username_candidate = self._username
if self._enable_metrics_collection:
username_candidate += METRICS_PREFIX
username_candidate += AWSIoTPythonSDK.__version__
self._internal_async_client.set_username_password(username_candidate, self._password)
def disconnect(self):
self._logger.info("Performing sync disconnect...")
event = Event()
self.disconnect_async(self._create_blocking_ack_callback(event))
if not event.wait(self._connect_disconnect_timeout_sec):
self._logger.error("Disconnect timed out")
raise disconnectTimeoutException()
if not self._event_consumer.wait_until_it_stops(self._connect_disconnect_timeout_sec):
self._logger.error("Disconnect timed out in waiting for event consumer")
raise disconnectTimeoutException()
return True
def disconnect_async(self, ack_callback=None):
self._logger.info("Performing async disconnect...")
self._client_status.set_status(ClientStatus.USER_DISCONNECT)
rc = self._internal_async_client.disconnect(ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Disconnect error: %d", rc)
raise disconnectError(rc)
return FixedEventMids.DISCONNECT_MID
def publish(self, topic, payload, qos, retain=False):
self._logger.info("Performing sync publish...")
ret = False
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.PUBLISH, (topic, payload, qos, retain))
else:
if qos > 0:
event = Event()
rc, mid = self._publish_async(topic, payload, qos, retain, self._create_blocking_ack_callback(event))
if not event.wait(self._operation_timeout_sec):
self._internal_async_client.remove_event_callback(mid)
self._logger.error("Publish timed out")
raise publishTimeoutException()
else:
self._publish_async(topic, payload, qos, retain)
ret = True
return ret
def publish_async(self, topic, payload, qos, retain=False, ack_callback=None):
self._logger.info("Performing async publish...")
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.PUBLISH, (topic, payload, qos, retain))
return FixedEventMids.QUEUED_MID
else:
rc, mid = self._publish_async(topic, payload, qos, retain, ack_callback)
return mid
def _publish_async(self, topic, payload, qos, retain=False, ack_callback=None):
rc, mid = self._internal_async_client.publish(topic, payload, qos, retain, ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Publish error: %d", rc)
raise publishError(rc)
return rc, mid
def subscribe(self, topic, qos, message_callback=None):
self._logger.info("Performing sync subscribe...")
ret = False
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.SUBSCRIBE, (topic, qos, message_callback, None))
else:
event = Event()
rc, mid = self._subscribe_async(topic, qos, self._create_blocking_ack_callback(event), message_callback)
if not event.wait(self._operation_timeout_sec):
self._internal_async_client.remove_event_callback(mid)
self._logger.error("Subscribe timed out")
raise subscribeTimeoutException()
ret = True
return ret
def subscribe_async(self, topic, qos, ack_callback=None, message_callback=None):
self._logger.info("Performing async subscribe...")
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.SUBSCRIBE, (topic, qos, message_callback, ack_callback))
return FixedEventMids.QUEUED_MID
else:
rc, mid = self._subscribe_async(topic, qos, ack_callback, message_callback)
return mid
def _subscribe_async(self, topic, qos, ack_callback=None, message_callback=None):
self._subscription_manager.add_record(topic, qos, message_callback, ack_callback)
rc, mid = self._internal_async_client.subscribe(topic, qos, ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Subscribe error: %d", rc)
raise subscribeError(rc)
return rc, mid
def unsubscribe(self, topic):
self._logger.info("Performing sync unsubscribe...")
ret = False
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.UNSUBSCRIBE, (topic, None))
else:
event = Event()
rc, mid = self._unsubscribe_async(topic, self._create_blocking_ack_callback(event))
if not event.wait(self._operation_timeout_sec):
self._internal_async_client.remove_event_callback(mid)
self._logger.error("Unsubscribe timed out")
raise unsubscribeTimeoutException()
ret = True
return ret
def unsubscribe_async(self, topic, ack_callback=None):
self._logger.info("Performing async unsubscribe...")
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.UNSUBSCRIBE, (topic, ack_callback))
return FixedEventMids.QUEUED_MID
else:
rc, mid = self._unsubscribe_async(topic, ack_callback)
return mid
def _unsubscribe_async(self, topic, ack_callback=None):
self._subscription_manager.remove_record(topic)
rc, mid = self._internal_async_client.unsubscribe(topic, ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Unsubscribe error: %d", rc)
raise unsubscribeError(rc)
return rc, mid
def _create_blocking_ack_callback(self, event):
def ack_callback(mid, data=None):
event.set()
return ack_callback
def _handle_offline_request(self, type, data):
self._logger.info("Offline request detected!")
offline_request = QueueableRequest(type, data)
append_result = self._offline_requests_manager.add_one(offline_request)
if AppendResults.APPEND_FAILURE_QUEUE_DISABLED == append_result:
self._logger.error("Offline request queue has been disabled")
raise self._offline_request_queue_disabled_exceptions[type]
if AppendResults.APPEND_FAILURE_QUEUE_FULL == append_result:
self._logger.error("Offline request queue is full")
raise self._offline_request_queue_full_exceptions[type]
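A hedged end-to-end usage sketch for MqttCore. The provider module paths, endpoint, and file paths below are assumptions for illustration, and error handling is trimmed.
from AWSIoTPythonSDK.core.protocol.mqtt_core import MqttCore  # assumed module path
from AWSIoTPythonSDK.core.protocol.paho.client import MQTTv31
from AWSIoTPythonSDK.core.util.providers import CertificateCredentialsProvider, EndpointProvider  # assumed path

core = MqttCore("myClientId", True, MQTTv31, False)  # clean session, cert-based mutual auth

endpoint_provider = EndpointProvider()
endpoint_provider.set_host("example.iot.us-east-1.amazonaws.com")  # hypothetical endpoint
endpoint_provider.set_port(8883)
core.configure_endpoint(endpoint_provider)

credentials_provider = CertificateCredentialsProvider()
credentials_provider.set_ca_path("./root-ca.pem")  # hypothetical paths
credentials_provider.set_cert_path("./certificate.pem")
credentials_provider.set_key_path("./private.key")
core.configure_cert_credentials(credentials_provider)

core.connect(600)  # sync connect; raises connectTimeoutException on timeout
core.publish("my/topic", "hello", 1, False)  # QoS 1 blocks until PUBACK or publishTimeoutException
core.disconnect()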

File diff suppressed because it is too large


@@ -0,0 +1,430 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import json
import logging
import uuid
from threading import Timer, Lock, Thread
class _shadowRequestToken:
URN_PREFIX_LENGTH = 9
def getNextToken(self):
return uuid.uuid4().urn[self.URN_PREFIX_LENGTH:] # We only need the uuid digits, not the urn prefix
class _basicJSONParser:
def setString(self, srcString):
self._rawString = srcString
self._dictionaryObject = None
def regenerateString(self):
return json.dumps(self._dictionaryObject)
def getAttributeValue(self, srcAttributeKey):
return self._dictionaryObject.get(srcAttributeKey)
def setAttributeValue(self, srcAttributeKey, srcAttributeValue):
self._dictionaryObject[srcAttributeKey] = srcAttributeValue
def validateJSON(self):
try:
self._dictionaryObject = json.loads(self._rawString)
except ValueError:
return False
return True
class deviceShadow:
_logger = logging.getLogger(__name__)
def __init__(self, srcShadowName, srcIsPersistentSubscribe, srcShadowManager):
"""
The class that denotes a local/client-side device shadow instance.
Users can perform shadow operations on this instance to retrieve and modify the
corresponding shadow JSON document in AWS IoT Cloud. The following shadow operations
are available:
- Get
- Update
- Delete
- Listen on delta
- Cancel listening on delta
This object is returned by the :code:`AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTShadowClient.createShadowWithName` function call.
There is no need to instantiate it directly from user scripts.
"""
if srcShadowName is None or srcIsPersistentSubscribe is None or srcShadowManager is None:
raise TypeError("None type inputs detected.")
self._shadowName = srcShadowName
# Tool handler
self._shadowManagerHandler = srcShadowManager
self._basicJSONParserHandler = _basicJSONParser()
self._tokenHandler = _shadowRequestToken()
# Properties
self._isPersistentSubscribe = srcIsPersistentSubscribe
self._lastVersionInSync = -1 # -1 means not initialized
self._isGetSubscribed = False
self._isUpdateSubscribed = False
self._isDeleteSubscribed = False
self._shadowSubscribeCallbackTable = dict()
self._shadowSubscribeCallbackTable["get"] = None
self._shadowSubscribeCallbackTable["delete"] = None
self._shadowSubscribeCallbackTable["update"] = None
self._shadowSubscribeCallbackTable["delta"] = None
self._shadowSubscribeStatusTable = dict()
self._shadowSubscribeStatusTable["get"] = 0
self._shadowSubscribeStatusTable["delete"] = 0
self._shadowSubscribeStatusTable["update"] = 0
self._tokenPool = dict()
self._dataStructureLock = Lock()
def _doNonPersistentUnsubscribe(self, currentAction):
self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, currentAction)
self._logger.info("Unsubscribed to " + currentAction + " accepted/rejected topics for deviceShadow: " + self._shadowName)
def generalCallback(self, client, userdata, message):
# In Python 3, message.payload comes in as bytes
# json.loads needs a string input
with self._dataStructureLock:
currentTopic = message.topic
currentAction = self._parseTopicAction(currentTopic) # get/delete/update/delta
currentType = self._parseTopicType(currentTopic) # accepted/rejected/delta
payloadUTF8String = message.payload.decode('utf-8')
# get/delete/update: Need to deal with token, timer and unsubscribe
if currentAction in ["get", "delete", "update"]:
# Check for token
self._basicJSONParserHandler.setString(payloadUTF8String)
if self._basicJSONParserHandler.validateJSON(): # Filter out invalid JSON
currentToken = self._basicJSONParserHandler.getAttributeValue(u"clientToken")
if currentToken is not None:
self._logger.debug("shadow message clientToken: " + currentToken)
if currentToken is not None and currentToken in self._tokenPool: # Filter out JSON without the desired token
# Sync local version when it is an accepted response
self._logger.debug("Token is in the pool. Type: " + currentType)
if currentType == "accepted":
incomingVersion = self._basicJSONParserHandler.getAttributeValue(u"version")
# If it is get/update accepted response, we need to sync the local version
if incomingVersion is not None and incomingVersion > self._lastVersionInSync and currentAction != "delete":
self._lastVersionInSync = incomingVersion
# If it is a delete accepted, we need to reset the version
else:
self._lastVersionInSync = -1 # The version will always be synced for the next incoming delta/get-accepted/update-accepted response
# Cancel the timer and clear the token
self._tokenPool[currentToken].cancel()
del self._tokenPool[currentToken]
# Need to unsubscribe?
self._shadowSubscribeStatusTable[currentAction] -= 1
if not self._isPersistentSubscribe and self._shadowSubscribeStatusTable.get(currentAction) <= 0:
self._shadowSubscribeStatusTable[currentAction] = 0
processNonPersistentUnsubscribe = Thread(target=self._doNonPersistentUnsubscribe, args=[currentAction])
processNonPersistentUnsubscribe.start()
# Custom callback
if self._shadowSubscribeCallbackTable.get(currentAction) is not None:
processCustomCallback = Thread(target=self._shadowSubscribeCallbackTable[currentAction], args=[payloadUTF8String, currentType, currentToken])
processCustomCallback.start()
# delta: Watch for version
else:
currentType += "/" + self._parseTopicShadowName(currentTopic)
# Sync local version
self._basicJSONParserHandler.setString(payloadUTF8String)
if self._basicJSONParserHandler.validateJSON(): # Filter out JSON without version
incomingVersion = self._basicJSONParserHandler.getAttributeValue(u"version")
if incomingVersion is not None and incomingVersion > self._lastVersionInSync:
self._lastVersionInSync = incomingVersion
# Custom callback
if self._shadowSubscribeCallbackTable.get(currentAction) is not None:
processCustomCallback = Thread(target=self._shadowSubscribeCallbackTable[currentAction], args=[payloadUTF8String, currentType, None])
processCustomCallback.start()
def _parseTopicAction(self, srcTopic):
ret = None
fragments = srcTopic.split('/')
if fragments[5] == "delta":
ret = "delta"
else:
ret = fragments[4]
return ret
def _parseTopicType(self, srcTopic):
fragments = srcTopic.split('/')
return fragments[5]
def _parseTopicShadowName(self, srcTopic):
fragments = srcTopic.split('/')
return fragments[2]
def _timerHandler(self, srcActionName, srcToken):
with self._dataStructureLock:
# Don't crash if we try to remove an unknown token
if srcToken not in self._tokenPool:
self._logger.warning('Tried to remove non-existent token from pool: %s' % str(srcToken))
return
# Remove the token
del self._tokenPool[srcToken]
# Need to unsubscribe?
self._shadowSubscribeStatusTable[srcActionName] -= 1
if not self._isPersistentSubscribe and self._shadowSubscribeStatusTable.get(srcActionName) <= 0:
self._shadowSubscribeStatusTable[srcActionName] = 0
self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, srcActionName)
# Notify time-out issue
if self._shadowSubscribeCallbackTable.get(srcActionName) is not None:
self._logger.info("Shadow request with token: " + str(srcToken) + " has timed out.")
self._shadowSubscribeCallbackTable[srcActionName]("REQUEST TIME OUT", "timeout", srcToken)
def shadowGet(self, srcCallback, srcTimeout):
"""
**Description**
Retrieve the device shadow JSON document from AWS IoT by publishing an empty JSON document to the
corresponding shadow topics. Shadow response topics will be subscribed to receive responses from
AWS IoT regarding the result of the get operation. Retrieved shadow JSON document will be available
in the registered callback. If no response is received within the provided timeout, a timeout
notification will be passed into the registered callback.
**Syntax**
.. code:: python
# Retrieve the shadow JSON document from AWS IoT, with a timeout set to 5 seconds
BotShadow.shadowGet(customCallback, 5)
**Parameters**
*srcCallback* - Function to be called when the response for this shadow request comes back. Should
be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
rejected or is a delta message, :code:`token` is the token used for tracing in this request.
*srcTimeout* - Timeout to determine whether the request is invalid. When a request times out,
a timeout notification will be generated and passed into the registered callback to notify users.
**Returns**
The token used for tracing in this shadow request.
"""
with self._dataStructureLock:
# Update callback data structure
self._shadowSubscribeCallbackTable["get"] = srcCallback
# Update number of pending feedback
self._shadowSubscribeStatusTable["get"] += 1
# clientToken
currentToken = self._tokenHandler.getNextToken()
self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["get", currentToken])
self._basicJSONParserHandler.setString("{}")
self._basicJSONParserHandler.validateJSON()
self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken)
currentPayload = self._basicJSONParserHandler.regenerateString()
# Two subscriptions
if not self._isPersistentSubscribe or not self._isGetSubscribed:
self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "get", self.generalCallback)
self._isGetSubscribed = True
self._logger.info("Subscribed to get accepted/rejected topics for deviceShadow: " + self._shadowName)
# One publish
self._shadowManagerHandler.basicShadowPublish(self._shadowName, "get", currentPayload)
# Start the timer
self._tokenPool[currentToken].start()
return currentToken
def shadowDelete(self, srcCallback, srcTimeout):
"""
**Description**
Delete the device shadow from AWS IoT by publishing an empty JSON document to the corresponding
shadow topics. Shadow response topics will be subscribed to receive responses from AWS IoT
regarding the result of the delete operation. Responses will be available in the registered callback.
If no response is received within the provided timeout, a timeout notification will be passed into
the registered callback.
**Syntax**
.. code:: python
# Delete the device shadow from AWS IoT, with a timeout set to 5 seconds
BotShadow.shadowDelete(customCallback, 5)
**Parameters**
*srcCallback* - Function to be called when the response for this shadow request comes back. Should
be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
rejected or is a delta message, :code:`token` is the token used for tracing in this request.
*srcTimeout* - Timeout to determine whether the request is invalid. When a request times out,
a timeout notification will be generated and passed into the registered callback to notify users.
**Returns**
The token used for tracing in this shadow request.
"""
with self._dataStructureLock:
# Update callback data structure
self._shadowSubscribeCallbackTable["delete"] = srcCallback
# Update number of pending feedback
self._shadowSubscribeStatusTable["delete"] += 1
# clientToken
currentToken = self._tokenHandler.getNextToken()
self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["delete", currentToken])
self._basicJSONParserHandler.setString("{}")
self._basicJSONParserHandler.validateJSON()
self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken)
currentPayload = self._basicJSONParserHandler.regenerateString()
# Two subscriptions
if not self._isPersistentSubscribe or not self._isDeleteSubscribed:
self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "delete", self.generalCallback)
self._isDeleteSubscribed = True
self._logger.info("Subscribed to delete accepted/rejected topics for deviceShadow: " + self._shadowName)
# One publish
self._shadowManagerHandler.basicShadowPublish(self._shadowName, "delete", currentPayload)
# Start the timer
self._tokenPool[currentToken].start()
return currentToken
def shadowUpdate(self, srcJSONPayload, srcCallback, srcTimeout):
"""
**Description**
Update the device shadow JSON document string in AWS IoT by publishing the provided JSON
document to the corresponding shadow topics. Shadow response topics will be subscribed to
receive responses from AWS IoT regarding the result of the update operation. Response will be
available in the registered callback. If no response is received within the provided timeout,
a timeout notification will be passed into the registered callback.
**Syntax**
.. code:: python
# Update the shadow JSON document in AWS IoT, with a timeout set to 5 seconds
BotShadow.shadowUpdate(newShadowJSONDocumentString, customCallback, 5)
**Parameters**
*srcJSONPayload* - JSON document string used to update shadow JSON document in AWS IoT.
*srcCallback* - Function to be called when the response for this shadow request comes back. Should
be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
rejected or is a delta message, :code:`token` is the token used for tracing in this request.
*srcTimeout* - Timeout to determine whether the request is invalid. When a request times out,
a timeout notification will be generated and passed into the registered callback to notify users.
**Returns**
The token used for tracing in this shadow request.
"""
# Validate JSON
self._basicJSONParserHandler.setString(srcJSONPayload)
if self._basicJSONParserHandler.validateJSON():
with self._dataStructureLock:
# clientToken
currentToken = self._tokenHandler.getNextToken()
self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["update", currentToken])
self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken)
JSONPayloadWithToken = self._basicJSONParserHandler.regenerateString()
# Update callback data structure
self._shadowSubscribeCallbackTable["update"] = srcCallback
# Update number of pending feedback
self._shadowSubscribeStatusTable["update"] += 1
# Two subscriptions
if not self._isPersistentSubscribe or not self._isUpdateSubscribed:
self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "update", self.generalCallback)
self._isUpdateSubscribed = True
self._logger.info("Subscribed to update accepted/rejected topics for deviceShadow: " + self._shadowName)
# One publish
self._shadowManagerHandler.basicShadowPublish(self._shadowName, "update", JSONPayloadWithToken)
# Start the timer
self._tokenPool[currentToken].start()
else:
raise ValueError("Invalid JSON file.")
return currentToken
def shadowRegisterDeltaCallback(self, srcCallback):
"""
**Description**
Listen on delta topics for this device shadow by subscribing to delta topics. Whenever there
is a difference between the desired and reported state, the registered callback will be called
and the delta payload will be available in the callback.
**Syntax**
.. code:: python
# Listen on delta topics for BotShadow
BotShadow.shadowRegisterDeltaCallback(customCallback)
**Parameters**
*srcCallback* - Function to be called when the response for this shadow request comes back. Should
be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
rejected or is a delta message, :code:`token` is the token used for tracing in this request.
**Returns**
None
"""
with self._dataStructureLock:
# Update callback data structure
self._shadowSubscribeCallbackTable["delta"] = srcCallback
# One subscription
self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "delta", self.generalCallback)
self._logger.info("Subscribed to delta topic for deviceShadow: " + self._shadowName)
def shadowUnregisterDeltaCallback(self):
"""
**Description**
Cancel listening on delta topics for this device shadow by unsubscribing from delta topics. There will
be no delta messages received after this API call even if there is a difference between the
desired and reported state.
**Syntax**
.. code:: python
# Cancel listening on delta topics for BotShadow
BotShadow.shadowUnregisterDeltaCallback()
**Parameters**
None
**Returns**
None
"""
with self._dataStructureLock:
# Update callback data structure
del self._shadowSubscribeCallbackTable["delta"]
# One unsubscription
self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, "delta")
self._logger.info("Unsubscribed to delta topics for deviceShadow: " + self._shadowName)


@@ -0,0 +1,83 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import logging
import time
from threading import Lock
class _shadowAction:
_actionType = ["get", "update", "delete", "delta"]
def __init__(self, srcShadowName, srcActionName):
if srcActionName is None or srcActionName not in self._actionType:
raise TypeError("Unsupported shadow action.")
self._shadowName = srcShadowName
self._actionName = srcActionName
self.isDelta = srcActionName == "delta"
if self.isDelta:
self._topicDelta = "$aws/things/" + str(self._shadowName) + "/shadow/update/delta"
else:
self._topicGeneral = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName)
self._topicAccept = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + "/accepted"
self._topicReject = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + "/rejected"
def getTopicGeneral(self):
return self._topicGeneral
def getTopicAccept(self):
return self._topicAccept
def getTopicReject(self):
return self._topicReject
def getTopicDelta(self):
return self._topicDelta
class shadowManager:
_logger = logging.getLogger(__name__)
def __init__(self, srcMQTTCore):
# Load in mqttCore
if srcMQTTCore is None:
raise TypeError("None type inputs detected.")
self._mqttCoreHandler = srcMQTTCore
self._shadowSubUnsubOperationLock = Lock()
def basicShadowPublish(self, srcShadowName, srcShadowAction, srcPayload):
currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
self._mqttCoreHandler.publish(currentShadowAction.getTopicGeneral(), srcPayload, 0, False)
def basicShadowSubscribe(self, srcShadowName, srcShadowAction, srcCallback):
with self._shadowSubUnsubOperationLock:
currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
if currentShadowAction.isDelta:
self._mqttCoreHandler.subscribe(currentShadowAction.getTopicDelta(), 0, srcCallback)
else:
self._mqttCoreHandler.subscribe(currentShadowAction.getTopicAccept(), 0, srcCallback)
self._mqttCoreHandler.subscribe(currentShadowAction.getTopicReject(), 0, srcCallback)
time.sleep(2) # Give the accepted/rejected subscriptions time to take effect before the request is published
def basicShadowUnsubscribe(self, srcShadowName, srcShadowAction):
with self._shadowSubUnsubOperationLock:
currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
if currentShadowAction.isDelta:
self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicDelta())
else:
self._logger.debug(currentShadowAction.getTopicAccept())
self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicAccept())
self._logger.debug(currentShadowAction.getTopicReject())
self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicReject())
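For reference, a sketch of the topic strings _shadowAction builds for a hypothetical thing named "Bot":
update_action = _shadowAction("Bot", "update")
update_action.getTopicGeneral()  # "$aws/things/Bot/shadow/update"
update_action.getTopicAccept()   # "$aws/things/Bot/shadow/update/accepted"
update_action.getTopicReject()   # "$aws/things/Bot/shadow/update/rejected"
delta_action = _shadowAction("Bot", "delta")
delta_action.getTopicDelta()     # "$aws/things/Bot/shadow/update/delta"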


@@ -0,0 +1,19 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class DropBehaviorTypes(object):
DROP_OLDEST = 0
DROP_NEWEST = 1
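These constants select which end of the offline request queue is dropped once it fills. A hedged sketch against the MqttCore API shown above, where :code:`core` is a hypothetical MqttCore instance:
core.configure_offline_requests_queue(20, DropBehaviorTypes.DROP_OLDEST)  # keep the 20 newest requests
core.configure_offline_requests_queue(20, DropBehaviorTypes.DROP_NEWEST)  # keep the 20 oldest requests
core.configure_offline_requests_queue(-1, DropBehaviorTypes.DROP_NEWEST)  # infinite queue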


@@ -0,0 +1,92 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class CredentialsProvider(object):
def __init__(self):
self._ca_path = ""
def set_ca_path(self, ca_path):
self._ca_path = ca_path
def get_ca_path(self):
return self._ca_path
class CertificateCredentialsProvider(CredentialsProvider):
def __init__(self):
CredentialsProvider.__init__(self)
self._cert_path = ""
self._key_path = ""
def set_cert_path(self,cert_path):
self._cert_path = cert_path
def set_key_path(self, key_path):
self._key_path = key_path
def get_cert_path(self):
return self._cert_path
def get_key_path(self):
return self._key_path
class IAMCredentialsProvider(CredentialsProvider):
def __init__(self):
CredentialsProvider.__init__(self)
self._aws_access_key_id = ""
self._aws_secret_access_key = ""
self._aws_session_token = ""
def set_access_key_id(self, access_key_id):
self._aws_access_key_id = access_key_id
def set_secret_access_key(self, secret_access_key):
self._aws_secret_access_key = secret_access_key
def set_session_token(self, session_token):
self._aws_session_token = session_token
def get_access_key_id(self):
return self._aws_access_key_id
def get_secret_access_key(self):
return self._aws_secret_access_key
def get_session_token(self):
return self._aws_session_token
class EndpointProvider(object):
def __init__(self):
self._host = ""
self._port = -1
def set_host(self, host):
self._host = host
def set_port(self, port):
self._port = port
def get_host(self):
return self._host
def get_port(self):
return self._port
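A hedged sketch of wiring an IAMCredentialsProvider for a SigV4/WebSocket connection; the credential strings are placeholders and should come from a secure source, never hard-coded.
iam_provider = IAMCredentialsProvider()
iam_provider.set_access_key_id("AKIA...")  # placeholder
iam_provider.set_secret_access_key("...")  # placeholder
iam_provider.set_session_token("...")      # optional, for temporary credentials
# An MqttCore created with use_wss=True would consume this via:
# core.configure_iam_credentials(iam_provider)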


@@ -0,0 +1,153 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import AWSIoTPythonSDK.exception.operationTimeoutException as operationTimeoutException
import AWSIoTPythonSDK.exception.operationError as operationError
# Serial Exception
class acceptTimeoutException(Exception):
def __init__(self, msg="Accept Timeout"):
self.message = msg
# MQTT Operation Timeout Exception
class connectTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Connect Timeout"):
self.message = msg
class disconnectTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Disconnect Timeout"):
self.message = msg
class publishTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Publish Timeout"):
self.message = msg
class subscribeTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Subscribe Timeout"):
self.message = msg
class unsubscribeTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Unsubscribe Timeout"):
self.message = msg
# MQTT Operation Error
class connectError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Connect Error: " + str(errorCode)
class disconnectError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Disconnect Error: " + str(errorCode)
class publishError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Publish Error: " + str(errorCode)
class publishQueueFullException(operationError.operationError):
def __init__(self):
self.message = "Internal Publish Queue Full"
class publishQueueDisabledException(operationError.operationError):
def __init__(self):
self.message = "Offline publish request dropped because queueing is disabled"
class subscribeError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Subscribe Error: " + str(errorCode)
class subscribeQueueFullException(operationError.operationError):
def __init__(self):
self.message = "Internal Subscribe Queue Full"
class subscribeQueueDisabledException(operationError.operationError):
def __init__(self):
self.message = "Offline subscribe request dropped because queueing is disabled"
class unsubscribeError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Unsubscribe Error: " + str(errorCode)
class unsubscribeQueueFullException(operationError.operationError):
def __init__(self):
self.message = "Internal Unsubscribe Queue Full"
class unsubscribeQueueDisabledException(operationError.operationError):
def __init__(self):
self.message = "Offline unsubscribe request dropped because queueing is disabled"
# Websocket Error
class wssNoKeyInEnvironmentError(operationError.operationError):
def __init__(self):
self.message = "No AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY detected in $ENV."
class wssHandShakeError(operationError.operationError):
def __init__(self):
self.message = "Error in WSS handshake."
# Greengrass Discovery Error
class DiscoveryDataNotFoundException(operationError.operationError):
def __init__(self):
self.message = "No discovery data found"
class DiscoveryTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, message="Discovery request timed out"):
self.message = message
class DiscoveryInvalidRequestException(operationError.operationError):
def __init__(self):
self.message = "Invalid discovery request"
class DiscoveryUnauthorizedException(operationError.operationError):
def __init__(self):
self.message = "Discovery request not authorized"
class DiscoveryThrottlingException(operationError.operationError):
def __init__(self):
self.message = "Too many discovery requests"
class DiscoveryFailure(operationError.operationError):
def __init__(self, message):
self.message = message
# Client Error
class ClientError(Exception):
def __init__(self, message):
self.message = message
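A hedged sketch of how these exceptions surface from the synchronous MqttCore operations shown earlier; :code:`core` is a hypothetical MqttCore instance.
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueFullException

try:
    core.publish("my/topic", "hello", 1, False)
except publishTimeoutException:
    pass  # no PUBACK within the operation timeout
except publishQueueFullException:
    pass  # offline and the bounded offline queue is full
except publishError as e:
    print(e.message)  # Paho-level return code error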


@@ -0,0 +1,19 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class operationError(Exception):
def __init__(self, msg="Operation Error"):
self.message = msg


@@ -0,0 +1,19 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class operationTimeoutException(Exception):
def __init__(self, msg="Operation Timeout"):
self.message = msg

File diff suppressed because it is too large


@@ -0,0 +1,3 @@
__version__ = "1.4.8"


@@ -0,0 +1,466 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import json
KEY_GROUP_LIST = "GGGroups"
KEY_GROUP_ID = "GGGroupId"
KEY_CORE_LIST = "Cores"
KEY_CORE_ARN = "thingArn"
KEY_CA_LIST = "CAs"
KEY_CONNECTIVITY_INFO_LIST = "Connectivity"
KEY_CONNECTIVITY_INFO_ID = "Id"
KEY_HOST_ADDRESS = "HostAddress"
KEY_PORT_NUMBER = "PortNumber"
KEY_METADATA = "Metadata"
class ConnectivityInfo(object):
"""
Class that stores one set of connectivity information.
This is the data model for easy access to the discovery information from the discovery request function call. No
need to call directly from user scripts.
"""
def __init__(self, id, host, port, metadata):
self._id = id
self._host = host
self._port = port
self._metadata = metadata
@property
def id(self):
"""
Connectivity Information Id.
"""
return self._id
@property
def host(self):
"""
Host address.
"""
return self._host
@property
def port(self):
"""
Port number.
"""
return self._port
@property
def metadata(self):
"""
Metadata string.
"""
return self._metadata
class CoreConnectivityInfo(object):
"""
Class that stores the connectivity information for a Greengrass core.
This is the data model for easy access to the discovery information from the discovery request function call. No
need to call directly from user scripts.
"""
def __init__(self, coreThingArn, groupId):
self._core_thing_arn = coreThingArn
self._group_id = groupId
self._connectivity_info_dict = dict()
@property
def coreThingArn(self):
"""
Thing arn for this Greengrass core.
"""
return self._core_thing_arn
@property
def groupId(self):
"""
Greengrass group id that this Greengrass core belongs to.
"""
return self._group_id
@property
def connectivityInfoList(self):
"""
The list of connectivity information that this Greengrass core has.
"""
return list(self._connectivity_info_dict.values())
def getConnectivityInfo(self, id):
"""
**Description**
Used for quickly accessing a certain set of connectivity information by id.
**Syntax**
.. code:: python
myCoreConnectivityInfo.getConnectivityInfo("CoolId")
**Parameters**
*id* - The id for the desired connectivity information.
**Return**
:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.ConnectivityInfo` object.
"""
return self._connectivity_info_dict.get(id)
def appendConnectivityInfo(self, connectivityInfo):
"""
**Description**
Used for adding a new set of connectivity information to the list for this Greengrass core. This is used by the
SDK internally. No need to call directly from user scripts.
**Syntax**
.. code:: python
myCoreConnectivityInfo.appendConnectivityInfo(newInfo)
**Parameters**
*connectivityInfo* - :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.ConnectivityInfo` object.
**Returns**
None
"""
self._connectivity_info_dict[connectivityInfo.id] = connectivityInfo
class GroupConnectivityInfo(object):
"""
Class that stores the connectivity information for a specific Greengrass group.
This is the data model for easy access to the discovery information from the discovery request function call. No
need to call directly from user scripts.
"""
def __init__(self, groupId):
self._group_id = groupId
self._core_connectivity_info_dict = dict()
self._ca_list = list()
@property
def groupId(self):
"""
Id for this Greengrass group.
"""
return self._group_id
@property
def coreConnectivityInfoList(self):
"""
A list of Greengrass cores
(:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object) that belong to this
Greengrass group.
"""
return list(self._core_connectivity_info_dict.values())
@property
def caList(self):
"""
A list of CA content strings for this Greengrass group.
"""
return self._ca_list
def getCoreConnectivityInfo(self, coreThingArn):
"""
**Description**
Used to retrieve the corresponding :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo`
object by core thing arn.
**Syntax**
.. code:: python
myGroupConnectivityInfo.getCoreConnectivityInfo("YourOwnArnString")
**Parameters**
coreThingArn - Thing arn for the desired Greengrass core.
**Returns**
        :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object.
"""
return self._core_connectivity_info_dict.get(coreThingArn)
def appendCoreConnectivityInfo(self, coreConnectivityInfo):
"""
**Description**
Used to append new core connectivity information to this group connectivity information. This is used by the
SDK internally. No need to call directly from user scripts.
**Syntax**
.. code:: python
myGroupConnectivityInfo.appendCoreConnectivityInfo(newCoreConnectivityInfo)
**Parameters**
*coreConnectivityInfo* - :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object.
**Returns**
None
"""
self._core_connectivity_info_dict[coreConnectivityInfo.coreThingArn] = coreConnectivityInfo
def appendCa(self, ca):
"""
**Description**
Used to append new CA content string to this group connectivity information. This is used by the SDK internally.
No need to call directly from user scripts.
**Syntax**
.. code:: python
myGroupConnectivityInfo.appendCa("CaContentString")
**Parameters**
*ca* - Group CA content string.
**Returns**
None
"""
self._ca_list.append(ca)
class DiscoveryInfo(object):
"""
Class that stores the discovery information coming back from the discovery request.
This is the data model for easy access to the discovery information from the discovery request function call. No
need to call directly from user scripts.
"""
def __init__(self, rawJson):
self._raw_json = rawJson
@property
def rawJson(self):
"""
        JSON response string that contains the discovery information. This is kept in case users want to do
        some processing on the raw response themselves.
"""
return self._raw_json
def getAllCores(self):
"""
**Description**
Used to retrieve the list of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo`
object for this discovery information. The retrieved cores could be from different Greengrass groups. This is
        designed for users who want to iterate through all available cores at the same time, regardless of which group
those cores are in.
**Syntax**
.. code:: python
myDiscoveryInfo.getAllCores()
**Parameters**
None
**Returns**
        List of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` objects.
"""
groups_list = self.getAllGroups()
core_list = list()
for group in groups_list:
core_list.extend(group.coreConnectivityInfoList)
return core_list
def getAllCas(self):
"""
**Description**
        Used to retrieve the list of :code:`(groupId, caContent)` pairs for this discovery information. The retrieved
pairs could be from different Greengrass groups. This is designed for users who want to iterate through all
available cores/groups/CAs at the same time, regardless of which group those CAs belong to.
**Syntax**
.. code:: python
myDiscoveryInfo.getAllCas()
**Parameters**
None
**Returns**
        List of :code:`(groupId, caContent)` string pairs, where :code:`caContent` is the CA content string and
:code:`groupId` is the group id that this CA belongs to.
"""
group_list = self.getAllGroups()
ca_list = list()
for group in group_list:
for ca in group.caList:
ca_list.append((group.groupId, ca))
return ca_list
def getAllGroups(self):
"""
**Description**
Used to retrieve the list of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo`
object for this discovery information. This is designed for users who want to iterate through all available
groups that this Greengrass aware device (GGAD) belongs to.
**Syntax**
.. code:: python
myDiscoveryInfo.getAllGroups()
**Parameters**
None
**Returns**
        List of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` objects.
"""
groups_dict = self.toObjectAtGroupLevel()
return list(groups_dict.values())
def toObjectAtGroupLevel(self):
"""
**Description**
Used to get a dictionary of Greengrass group discovery information, with group id string as key and the
corresponding :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` object as the
value. This is designed for users who know exactly which group, which core and which set of connectivity info
they want to use for the Greengrass aware device to connect.
**Syntax**
.. code:: python
# Get to the targeted connectivity information for a specific core in a specific group
groupLevelDiscoveryInfoObj = myDiscoveryInfo.toObjectAtGroupLevel()
          groupConnectivityInfoObj = groupLevelDiscoveryInfoObj["IKnowMyGroupId"]
coreConnectivityInfoObj = groupConnectivityInfoObj.getCoreConnectivityInfo("IKnowMyCoreThingArn")
connectivityInfo = coreConnectivityInfoObj.getConnectivityInfo("IKnowMyConnectivityInfoSetId")
# Now retrieve the detailed information
caList = groupConnectivityInfoObj.caList
host = connectivityInfo.host
port = connectivityInfo.port
metadata = connectivityInfo.metadata
# Actual connecting logic follows...
"""
groups_object = json.loads(self._raw_json)
groups_dict = dict()
for group_object in groups_object[KEY_GROUP_LIST]:
group_info = self._decode_group_info(group_object)
groups_dict[group_info.groupId] = group_info
return groups_dict
def _decode_group_info(self, group_object):
group_id = group_object[KEY_GROUP_ID]
group_info = GroupConnectivityInfo(group_id)
for core in group_object[KEY_CORE_LIST]:
core_info = self._decode_core_info(core, group_id)
group_info.appendCoreConnectivityInfo(core_info)
for ca in group_object[KEY_CA_LIST]:
group_info.appendCa(ca)
return group_info
def _decode_core_info(self, core_object, group_id):
core_info = CoreConnectivityInfo(core_object[KEY_CORE_ARN], group_id)
for connectivity_info_object in core_object[KEY_CONNECTIVITY_INFO_LIST]:
connectivity_info = ConnectivityInfo(connectivity_info_object[KEY_CONNECTIVITY_INFO_ID],
connectivity_info_object[KEY_HOST_ADDRESS],
connectivity_info_object[KEY_PORT_NUMBER],
connectivity_info_object.get(KEY_METADATA,''))
core_info.appendConnectivityInfo(connectivity_info)
return core_info
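# A minimal traversal sketch for the models above, using hypothetical
# placeholder values (id, arn, host, port); real instances are built by the
# SDK from an actual discovery response.
if __name__ == "__main__":
    connectivity_info = ConnectivityInfo("CoolId", "192.168.1.1", 8883, "")
    core_info = CoreConnectivityInfo("arn:aws:iot:us-east-1:123456789012:thing/MyCore", "myGroupId")
    core_info.appendConnectivityInfo(connectivity_info)
    group_info = GroupConnectivityInfo("myGroupId")
    group_info.appendCoreConnectivityInfo(core_info)
    group_info.appendCa("-----BEGIN CERTIFICATE-----\n<ca body>\n-----END CERTIFICATE-----")
    # Walk the hierarchy the same way user code would after a discovery call
    for core in group_info.coreConnectivityInfoList:
        info = core.getConnectivityInfo("CoolId")
        print(core.coreThingArn, info.host, info.port)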

View File

@@ -0,0 +1,426 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryInvalidRequestException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryUnauthorizedException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryDataNotFoundException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryThrottlingException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryFailure
from AWSIoTPythonSDK.core.greengrass.discovery.models import DiscoveryInfo
from AWSIoTPythonSDK.core.protocol.connection.alpn import SSLContextBuilder
import re
import sys
import ssl
import time
import errno
import logging
import socket
import platform
if platform.system() == 'Windows':
EAGAIN = errno.WSAEWOULDBLOCK
else:
EAGAIN = errno.EAGAIN
class DiscoveryInfoProvider(object):
REQUEST_TYPE_PREFIX = "GET "
PAYLOAD_PREFIX = "/greengrass/discover/thing/"
PAYLOAD_SUFFIX = " HTTP/1.1\r\n" # Space in the front
HOST_PREFIX = "Host: "
HOST_SUFFIX = "\r\n\r\n"
HTTP_PROTOCOL = r"HTTP/1.1 "
CONTENT_LENGTH = r"content-length: "
CONTENT_LENGTH_PATTERN = CONTENT_LENGTH + r"([0-9]+)\r\n"
HTTP_RESPONSE_CODE_PATTERN = HTTP_PROTOCOL + r"([0-9]+) "
HTTP_SC_200 = "200"
HTTP_SC_400 = "400"
HTTP_SC_401 = "401"
HTTP_SC_404 = "404"
HTTP_SC_429 = "429"
LOW_LEVEL_RC_COMPLETE = 0
LOW_LEVEL_RC_TIMEOUT = -1
_logger = logging.getLogger(__name__)
def __init__(self, caPath="", certPath="", keyPath="", host="", port=8443, timeoutSec=120):
"""
        The class that provides the functionality to perform Greengrass discovery against the cloud.
        Users can run the Greengrass discovery process for a specific Greengrass aware device to retrieve
        connectivity/identity information of Greengrass cores within the same group.
**Syntax**
.. code:: python
from AWSIoTPythonSDK.core.greengrass.discovery.providers import DiscoveryInfoProvider
# Create a discovery information provider
myDiscoveryInfoProvider = DiscoveryInfoProvider()
# Create a discovery information provider with custom configuration
myDiscoveryInfoProvider = DiscoveryInfoProvider(caPath=myCAPath, certPath=myCertPath, keyPath=myKeyPath, host=myHost, timeoutSec=myTimeoutSec)
**Parameters**
*caPath* - Path to read the root CA file.
*certPath* - Path to read the certificate file.
*keyPath* - Path to read the private key file.
*host* - String that denotes the host name of the user-specific AWS IoT endpoint.
        *port* - Integer that denotes the port number to connect to. For discovery purposes, this is 8443 by default.
        *timeoutSec* - Timeout, in seconds, after which a discovery request send or response wait is considered
        to have timed out.
**Returns**
AWSIoTPythonSDK.core.greengrass.discovery.providers.DiscoveryInfoProvider object
"""
self._ca_path = caPath
self._cert_path = certPath
self._key_path = keyPath
self._host = host
self._port = port
self._timeout_sec = timeoutSec
self._expected_exception_map = {
self.HTTP_SC_400 : DiscoveryInvalidRequestException(),
self.HTTP_SC_401 : DiscoveryUnauthorizedException(),
self.HTTP_SC_404 : DiscoveryDataNotFoundException(),
self.HTTP_SC_429 : DiscoveryThrottlingException()
}
def configureEndpoint(self, host, port=8443):
"""
**Description**
Used to configure the host address and port number for the discovery request to hit. Should be called before
the discovery request happens.
**Syntax**
.. code:: python
# Using default port configuration, 8443
myDiscoveryInfoProvider.configureEndpoint(host="prefix.iot.us-east-1.amazonaws.com")
# Customize port configuration
myDiscoveryInfoProvider.configureEndpoint(host="prefix.iot.us-east-1.amazonaws.com", port=8888)
**Parameters**
*host* - String that denotes the host name of the user-specific AWS IoT endpoint.
        *port* - Integer that denotes the port number to connect to. For discovery purposes, this is 8443 by default.
**Returns**
None
"""
self._host = host
self._port = port
def configureCredentials(self, caPath, certPath, keyPath):
"""
**Description**
Used to configure the credentials for discovery request. Should be called before the discovery request happens.
**Syntax**
.. code:: python
myDiscoveryInfoProvider.configureCredentials("my/ca/path", "my/cert/path", "my/key/path")
**Parameters**
*caPath* - Path to read the root CA file.
*certPath* - Path to read the certificate file.
*keyPath* - Path to read the private key file.
**Returns**
None
"""
self._ca_path = caPath
self._cert_path = certPath
self._key_path = keyPath
def configureTimeout(self, timeoutSec):
"""
**Description**
Used to configure the time out in seconds for discovery request sending/response waiting. Should be called before
the discovery request happens.
**Syntax**
.. code:: python
# Configure the time out for discovery to be 10 seconds
myDiscoveryInfoProvider.configureTimeout(10)
**Parameters**
        *timeoutSec* - Timeout, in seconds, after which a discovery request send or response wait is considered
        to have timed out.
**Returns**
None
"""
self._timeout_sec = timeoutSec
def discover(self, thingName):
"""
**Description**
Perform the discovery request for the given Greengrass aware device thing name.
**Syntax**
.. code:: python
myDiscoveryInfoProvider.discover(thingName="myGGAD")
**Parameters**
*thingName* - Greengrass aware device thing name.
**Returns**
:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.DiscoveryInfo` object.
"""
self._logger.info("Starting discover request...")
self._logger.info("Endpoint: " + self._host + ":" + str(self._port))
self._logger.info("Target thing: " + thingName)
sock = self._create_tcp_connection()
ssl_sock = self._create_ssl_connection(sock)
self._raise_on_timeout(self._send_discovery_request(ssl_sock, thingName))
status_code, response_body = self._receive_discovery_response(ssl_sock)
return self._raise_if_not_200(status_code, response_body)
    def _create_tcp_connection(self):
        self._logger.debug("Creating tcp connection...")
        try:
            if (sys.version_info[0] == 2 and sys.version_info[1] < 7) or (sys.version_info[0] == 3 and sys.version_info[1] < 2):
                sock = socket.create_connection((self._host, self._port))
            else:
                sock = socket.create_connection((self._host, self._port), source_address=("", 0))
            self._logger.debug("Created tcp connection.")
            return sock
        except socket.error as err:
            if err.errno != errno.EINPROGRESS and err.errno != errno.EWOULDBLOCK and err.errno != EAGAIN:
                raise
def _create_ssl_connection(self, sock):
self._logger.debug("Creating ssl connection...")
ssl_protocol_version = ssl.PROTOCOL_SSLv23
if self._port == 443:
ssl_context = SSLContextBuilder()\
.with_ca_certs(self._ca_path)\
.with_cert_key_pair(self._cert_path, self._key_path)\
.with_cert_reqs(ssl.CERT_REQUIRED)\
.with_check_hostname(True)\
.with_ciphers(None)\
.with_alpn_protocols(['x-amzn-http-ca'])\
.build()
ssl_sock = ssl_context.wrap_socket(sock, server_hostname=self._host, do_handshake_on_connect=False)
ssl_sock.do_handshake()
else:
ssl_sock = ssl.wrap_socket(sock,
certfile=self._cert_path,
keyfile=self._key_path,
ca_certs=self._ca_path,
cert_reqs=ssl.CERT_REQUIRED,
ssl_version=ssl_protocol_version)
self._logger.debug("Matching host name...")
if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and sys.version_info[1] < 2):
self._tls_match_hostname(ssl_sock)
else:
ssl.match_hostname(ssl_sock.getpeercert(), self._host)
return ssl_sock
def _tls_match_hostname(self, ssl_sock):
try:
cert = ssl_sock.getpeercert()
except AttributeError:
            # getpeercert() can throw an AttributeError ("object has no attribute 'peer_certificate'").
            # Don't let that crash the whole client. See also: http://bugs.python.org/issue13721
raise ssl.SSLError('Not connected')
san = cert.get('subjectAltName')
if san:
have_san_dns = False
for (key, value) in san:
if key == 'DNS':
have_san_dns = True
if self._host_matches_cert(self._host.lower(), value.lower()) == True:
return
if key == 'IP Address':
have_san_dns = True
if value.lower() == self._host.lower():
return
if have_san_dns:
# Only check subject if subjectAltName dns not found.
raise ssl.SSLError('Certificate subject does not match remote hostname.')
subject = cert.get('subject')
if subject:
for ((key, value),) in subject:
if key == 'commonName':
if self._host_matches_cert(self._host.lower(), value.lower()) == True:
return
raise ssl.SSLError('Certificate subject does not match remote hostname.')
def _host_matches_cert(self, host, cert_host):
if cert_host[0:2] == "*.":
if cert_host.count("*") != 1:
return False
host_match = host.split(".", 1)[1]
cert_match = cert_host.split(".", 1)[1]
if host_match == cert_match:
return True
else:
return False
else:
if host == cert_host:
return True
else:
return False
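    # Worked example of the matching rules above (hypothetical names): a cert
    # host of "*.iot.us-east-1.amazonaws.com" matches a host of
    # "prefix.iot.us-east-1.amazonaws.com" because everything after the first
    # label is identical; a non-wildcard cert host only matches on exact equality.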
def _send_discovery_request(self, ssl_sock, thing_name):
request = self.REQUEST_TYPE_PREFIX + \
self.PAYLOAD_PREFIX + \
thing_name + \
self.PAYLOAD_SUFFIX + \
self.HOST_PREFIX + \
self._host + ":" + str(self._port) + \
self.HOST_SUFFIX
self._logger.debug("Sending discover request: " + request)
start_time = time.time()
desired_length_to_write = len(request)
actual_length_written = 0
while True:
try:
length_written = ssl_sock.write(request.encode("utf-8"))
actual_length_written += length_written
except socket.error as err:
if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE:
pass
if actual_length_written == desired_length_to_write:
return self.LOW_LEVEL_RC_COMPLETE
if start_time + self._timeout_sec < time.time():
return self.LOW_LEVEL_RC_TIMEOUT
def _receive_discovery_response(self, ssl_sock):
self._logger.debug("Receiving discover response header...")
rc1, response_header = self._receive_until(ssl_sock, self._got_two_crlfs)
status_code, body_length = self._handle_discovery_response_header(rc1, response_header.decode("utf-8"))
self._logger.debug("Receiving discover response body...")
rc2, response_body = self._receive_until(ssl_sock, self._got_enough_bytes, body_length)
response_body = self._handle_discovery_response_body(rc2, response_body.decode("utf-8"))
return status_code, response_body
def _receive_until(self, ssl_sock, criteria_function, extra_data=None):
start_time = time.time()
response = bytearray()
number_bytes_read = 0
while True: # Python does not have do-while
try:
response.append(self._convert_to_int_py3(ssl_sock.read(1)))
number_bytes_read += 1
except socket.error as err:
if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE:
pass
if criteria_function((number_bytes_read, response, extra_data)):
return self.LOW_LEVEL_RC_COMPLETE, response
if start_time + self._timeout_sec < time.time():
return self.LOW_LEVEL_RC_TIMEOUT, response
    def _convert_to_int_py3(self, input_char):
        try:
            return ord(input_char)
        except TypeError:  # Input is not a length-1 string/bytes; return it unchanged
            return input_char
def _got_enough_bytes(self, data):
number_bytes_read, response, target_length = data
return number_bytes_read == int(target_length)
def _got_two_crlfs(self, data):
number_bytes_read, response, extra_data_unused = data
number_of_crlf = 2
has_enough_bytes = number_bytes_read > number_of_crlf * 2 - 1
if has_enough_bytes:
end_of_received = response[number_bytes_read - number_of_crlf * 2 : number_bytes_read]
expected_end_of_response = b"\r\n" * number_of_crlf
return end_of_received == expected_end_of_response
else:
return False
def _handle_discovery_response_header(self, rc, response):
self._raise_on_timeout(rc)
http_status_code_matcher = re.compile(self.HTTP_RESPONSE_CODE_PATTERN)
http_status_code_matched_groups = http_status_code_matcher.match(response)
content_length_matcher = re.compile(self.CONTENT_LENGTH_PATTERN)
content_length_matched_groups = content_length_matcher.search(response)
return http_status_code_matched_groups.group(1), content_length_matched_groups.group(1)
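    # Worked example (hypothetical header block): for a response starting with
    # "HTTP/1.1 200 OK\r\n...\r\ncontent-length: 120\r\n\r\n", the first regex
    # captures "200" as the status code and the second captures "120" as the
    # body length to read.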
def _handle_discovery_response_body(self, rc, response):
self._raise_on_timeout(rc)
return response
def _raise_on_timeout(self, rc):
if rc == self.LOW_LEVEL_RC_TIMEOUT:
raise DiscoveryTimeoutException()
def _raise_if_not_200(self, status_code, response_body): # response_body here is str in Py3
if status_code != self.HTTP_SC_200:
expected_exception = self._expected_exception_map.get(status_code)
if expected_exception:
raise expected_exception
else:
raise DiscoveryFailure(response_body)
return DiscoveryInfo(response_body)
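# A minimal end-to-end sketch, assuming hypothetical endpoint/credential paths
# and thing name; real values come from the user's own AWS IoT account.
if __name__ == "__main__":
    provider = DiscoveryInfoProvider()
    provider.configureEndpoint("prefix.iot.us-east-1.amazonaws.com")  # Port defaults to 8443
    provider.configureCredentials("path/to/root-ca.pem", "path/to/cert.pem", "path/to/key.pem")
    provider.configureTimeout(10)  # 10 sec for request sending/response waiting
    discovery_info = provider.discover(thingName="myGGAD")
    for group_id, ca_content in discovery_info.getAllCas():
        print(group_id, len(ca_content))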

View File

@@ -0,0 +1,156 @@
# /*
# * Copyright 2010-2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import json
_BASE_THINGS_TOPIC = "$aws/things/"
_NOTIFY_OPERATION = "notify"
_NOTIFY_NEXT_OPERATION = "notify-next"
_GET_OPERATION = "get"
_START_NEXT_OPERATION = "start-next"
_WILDCARD_OPERATION = "+"
_UPDATE_OPERATION = "update"
_ACCEPTED_REPLY = "accepted"
_REJECTED_REPLY = "rejected"
_WILDCARD_REPLY = "#"
#Members of this enum are tuples
_JOB_ID_REQUIRED_INDEX = 1
_JOB_OPERATION_INDEX = 2
_STATUS_KEY = 'status'
_STATUS_DETAILS_KEY = 'statusDetails'
_EXPECTED_VERSION_KEY = 'expectedVersion'
_EXECUTION_NUMBER_KEY = 'executionNumber'
_INCLUDE_JOB_EXECUTION_STATE_KEY = 'includeJobExecutionState'
_INCLUDE_JOB_DOCUMENT_KEY = 'includeJobDocument'
_CLIENT_TOKEN_KEY = 'clientToken'
_STEP_TIMEOUT_IN_MINUTES_KEY = 'stepTimeoutInMinutes'
#The type of job topic.
class jobExecutionTopicType(object):
JOB_UNRECOGNIZED_TOPIC = (0, False, '')
JOB_GET_PENDING_TOPIC = (1, False, _GET_OPERATION)
JOB_START_NEXT_TOPIC = (2, False, _START_NEXT_OPERATION)
JOB_DESCRIBE_TOPIC = (3, True, _GET_OPERATION)
JOB_UPDATE_TOPIC = (4, True, _UPDATE_OPERATION)
JOB_NOTIFY_TOPIC = (5, False, _NOTIFY_OPERATION)
JOB_NOTIFY_NEXT_TOPIC = (6, False, _NOTIFY_NEXT_OPERATION)
JOB_WILDCARD_TOPIC = (7, False, _WILDCARD_OPERATION)
#Members of this enum are tuples
_JOB_SUFFIX_INDEX = 1
#The type of reply topic, or #JOB_REQUEST_TYPE for topics that are not replies.
class jobExecutionTopicReplyType(object):
JOB_UNRECOGNIZED_TOPIC_TYPE = (0, '')
JOB_REQUEST_TYPE = (1, '')
JOB_ACCEPTED_REPLY_TYPE = (2, '/' + _ACCEPTED_REPLY)
JOB_REJECTED_REPLY_TYPE = (3, '/' + _REJECTED_REPLY)
JOB_WILDCARD_REPLY_TYPE = (4, '/' + _WILDCARD_REPLY)
_JOB_STATUS_INDEX = 1
class jobExecutionStatus(object):
JOB_EXECUTION_STATUS_NOT_SET = (0, None)
JOB_EXECUTION_QUEUED = (1, 'QUEUED')
JOB_EXECUTION_IN_PROGRESS = (2, 'IN_PROGRESS')
JOB_EXECUTION_FAILED = (3, 'FAILED')
JOB_EXECUTION_SUCCEEDED = (4, 'SUCCEEDED')
JOB_EXECUTION_CANCELED = (5, 'CANCELED')
JOB_EXECUTION_REJECTED = (6, 'REJECTED')
JOB_EXECUTION_UNKNOWN_STATUS = (99, None)
def _getExecutionStatus(jobStatus):
    try:
        return jobStatus[_JOB_STATUS_INDEX]
    except (TypeError, IndexError):  # Tuple indexing raises these, not KeyError
        return None
def _isWithoutJobIdTopicType(srcJobExecTopicType):
return (srcJobExecTopicType == jobExecutionTopicType.JOB_GET_PENDING_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_START_NEXT_TOPIC
or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC)
class thingJobManager:
def __init__(self, thingName, clientToken = None):
self._thingName = thingName
self._clientToken = clientToken
def getJobTopic(self, srcJobExecTopicType, srcJobExecTopicReplyType=jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId=None):
if self._thingName is None:
return None
#Verify topics that only support request type, actually have request type specified for reply
if (srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) and srcJobExecTopicReplyType != jobExecutionTopicReplyType.JOB_REQUEST_TYPE:
return None
#Verify topics that explicitly do not want a job ID do not have one specified
if (jobId is not None and _isWithoutJobIdTopicType(srcJobExecTopicType)):
return None
#Verify job ID is present if the topic requires one
if jobId is None and srcJobExecTopicType[_JOB_ID_REQUIRED_INDEX]:
return None
#Ensure the job operation is a non-empty string
if srcJobExecTopicType[_JOB_OPERATION_INDEX] == '':
return None
if srcJobExecTopicType[_JOB_ID_REQUIRED_INDEX]:
return '{0}{1}/jobs/{2}/{3}{4}'.format(_BASE_THINGS_TOPIC, self._thingName, str(jobId), srcJobExecTopicType[_JOB_OPERATION_INDEX], srcJobExecTopicReplyType[_JOB_SUFFIX_INDEX])
elif srcJobExecTopicType == jobExecutionTopicType.JOB_WILDCARD_TOPIC:
return '{0}{1}/jobs/#'.format(_BASE_THINGS_TOPIC, self._thingName)
else:
return '{0}{1}/jobs/{2}{3}'.format(_BASE_THINGS_TOPIC, self._thingName, srcJobExecTopicType[_JOB_OPERATION_INDEX], srcJobExecTopicReplyType[_JOB_SUFFIX_INDEX])
def serializeJobExecutionUpdatePayload(self, status, statusDetails=None, expectedVersion=0, executionNumber=0, includeJobExecutionState=False, includeJobDocument=False, stepTimeoutInMinutes=None):
executionStatus = _getExecutionStatus(status)
if executionStatus is None:
return None
payload = {_STATUS_KEY: executionStatus}
if statusDetails:
payload[_STATUS_DETAILS_KEY] = statusDetails
if expectedVersion > 0:
payload[_EXPECTED_VERSION_KEY] = str(expectedVersion)
if executionNumber > 0:
            payload[_EXECUTION_NUMBER_KEY] = str(executionNumber)
if includeJobExecutionState:
payload[_INCLUDE_JOB_EXECUTION_STATE_KEY] = True
if includeJobDocument:
payload[_INCLUDE_JOB_DOCUMENT_KEY] = True
if self._clientToken is not None:
payload[_CLIENT_TOKEN_KEY] = self._clientToken
if stepTimeoutInMinutes is not None:
payload[_STEP_TIMEOUT_IN_MINUTES_KEY] = stepTimeoutInMinutes
return json.dumps(payload)
def serializeDescribeJobExecutionPayload(self, executionNumber=0, includeJobDocument=True):
payload = {_INCLUDE_JOB_DOCUMENT_KEY: includeJobDocument}
if executionNumber > 0:
            payload[_EXECUTION_NUMBER_KEY] = executionNumber
if self._clientToken is not None:
payload[_CLIENT_TOKEN_KEY] = self._clientToken
return json.dumps(payload)
def serializeStartNextPendingJobExecutionPayload(self, statusDetails=None, stepTimeoutInMinutes=None):
payload = {}
if self._clientToken is not None:
payload[_CLIENT_TOKEN_KEY] = self._clientToken
if statusDetails is not None:
payload[_STATUS_DETAILS_KEY] = statusDetails
if stepTimeoutInMinutes is not None:
payload[_STEP_TIMEOUT_IN_MINUTES_KEY] = stepTimeoutInMinutes
return json.dumps(payload)
def serializeClientTokenPayload(self):
return json.dumps({_CLIENT_TOKEN_KEY: self._clientToken}) if self._clientToken is not None else '{}'
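# A minimal sketch of topic and payload construction, assuming a hypothetical
# thing name, client token and job id.
if __name__ == "__main__":
    manager = thingJobManager("myThing", clientToken="myToken")
    # -> $aws/things/myThing/jobs/jobId123/update
    print(manager.getJobTopic(jobExecutionTopicType.JOB_UPDATE_TOPIC,
                              jobExecutionTopicReplyType.JOB_REQUEST_TYPE, "jobId123"))
    # -> $aws/things/myThing/jobs/jobId123/update/accepted
    print(manager.getJobTopic(jobExecutionTopicType.JOB_UPDATE_TOPIC,
                              jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE, "jobId123"))
    # -> {"status": "IN_PROGRESS", "clientToken": "myToken", "stepTimeoutInMinutes": 5}
    print(manager.serializeJobExecutionUpdatePayload(
        jobExecutionStatus.JOB_EXECUTION_IN_PROGRESS, stepTimeoutInMinutes=5))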

View File

@@ -0,0 +1,63 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
try:
import ssl
except:
ssl = None
class SSLContextBuilder(object):
def __init__(self):
self.check_supportability()
self._ssl_context = ssl.create_default_context()
def check_supportability(self):
if ssl is None:
raise RuntimeError("This platform has no SSL/TLS.")
if not hasattr(ssl, "SSLContext"):
raise NotImplementedError("This platform does not support SSLContext. Python 2.7.10+/3.5+ is required.")
if not hasattr(ssl.SSLContext, "set_alpn_protocols"):
raise NotImplementedError("This platform does not support ALPN as TLS extensions. Python 2.7.10+/3.5+ is required.")
def with_ca_certs(self, ca_certs):
self._ssl_context.load_verify_locations(ca_certs)
return self
def with_cert_key_pair(self, cert_file, key_file):
self._ssl_context.load_cert_chain(cert_file, key_file)
return self
def with_cert_reqs(self, cert_reqs):
self._ssl_context.verify_mode = cert_reqs
return self
def with_check_hostname(self, check_hostname):
self._ssl_context.check_hostname = check_hostname
return self
def with_ciphers(self, ciphers):
if ciphers is not None:
self._ssl_context.set_ciphers(ciphers) # set_ciphers() does not allow None input. Use default (do nothing) if None
return self
def with_alpn_protocols(self, alpn_protocols):
self._ssl_context.set_alpn_protocols(alpn_protocols)
return self
def build(self):
return self._ssl_context
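# A minimal sketch mirroring how the discovery provider builds its ALPN
# context for port 443; the credential paths are hypothetical placeholders.
if __name__ == "__main__":
    context = SSLContextBuilder()\
        .with_ca_certs("path/to/root-ca.pem")\
        .with_cert_key_pair("path/to/cert.pem", "path/to/key.pem")\
        .with_cert_reqs(ssl.CERT_REQUIRED)\
        .with_check_hostname(True)\
        .with_ciphers(None)\
        .with_alpn_protocols(['x-amzn-http-ca'])\
        .build()
    print(context.check_hostname)  # True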

View File

@@ -0,0 +1,699 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
# This class implements the progressive backoff logic for auto-reconnect.
# It manages the reconnect wait time for the current reconnect, controlling
# when to increase it and when to reset it.
import re
import sys
import ssl
import errno
import struct
import socket
import base64
import time
import threading
import logging
import os
from datetime import datetime
import hashlib
import hmac
from AWSIoTPythonSDK.exception.AWSIoTExceptions import ClientError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import wssNoKeyInEnvironmentError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import wssHandShakeError
from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC
try:
from urllib.parse import quote # Python 3+
except ImportError:
from urllib import quote
# INI config file handling
try:
from configparser import ConfigParser # Python 3+
from configparser import NoOptionError
from configparser import NoSectionError
except ImportError:
from ConfigParser import ConfigParser
from ConfigParser import NoOptionError
from ConfigParser import NoSectionError
class ProgressiveBackOffCore:
# Logger
_logger = logging.getLogger(__name__)
def __init__(self, srcBaseReconnectTimeSecond=1, srcMaximumReconnectTimeSecond=32, srcMinimumConnectTimeSecond=20):
# The base reconnection time in seconds, default 1
self._baseReconnectTimeSecond = srcBaseReconnectTimeSecond
# The maximum reconnection time in seconds, default 32
self._maximumReconnectTimeSecond = srcMaximumReconnectTimeSecond
        # The minimum time in seconds that a connection must be maintained in order to be considered stable
        # Default 20
self._minimumConnectTimeSecond = srcMinimumConnectTimeSecond
        # Current backoff time in seconds, initialized to 1
self._currentBackoffTimeSecond = 1
# Handler for timer
self._resetBackoffTimer = None
# For custom progressiveBackoff timing configuration
def configTime(self, srcBaseReconnectTimeSecond, srcMaximumReconnectTimeSecond, srcMinimumConnectTimeSecond):
if srcBaseReconnectTimeSecond < 0 or srcMaximumReconnectTimeSecond < 0 or srcMinimumConnectTimeSecond < 0:
self._logger.error("init: Negative time configuration detected.")
raise ValueError("Negative time configuration detected.")
if srcBaseReconnectTimeSecond >= srcMinimumConnectTimeSecond:
self._logger.error("init: Min connect time should be bigger than base reconnect time.")
raise ValueError("Min connect time should be bigger than base reconnect time.")
self._baseReconnectTimeSecond = srcBaseReconnectTimeSecond
self._maximumReconnectTimeSecond = srcMaximumReconnectTimeSecond
self._minimumConnectTimeSecond = srcMinimumConnectTimeSecond
self._currentBackoffTimeSecond = 1
# Block the reconnect logic for _currentBackoffTimeSecond
# Update the currentBackoffTimeSecond for the next reconnect
# Cancel the in-waiting timer for resetting backOff time
# This should get called only when a disconnect/reconnect happens
def backOff(self):
self._logger.debug("backOff: current backoff time is: " + str(self._currentBackoffTimeSecond) + " sec.")
if self._resetBackoffTimer is not None:
# Cancel the timer
self._resetBackoffTimer.cancel()
# Block the reconnect logic
time.sleep(self._currentBackoffTimeSecond)
# Update the backoff time
if self._currentBackoffTimeSecond == 0:
# This is the first attempt to connect, set it to base
self._currentBackoffTimeSecond = self._baseReconnectTimeSecond
else:
# r_cur = min(2^n*r_base, r_max)
self._currentBackoffTimeSecond = min(self._maximumReconnectTimeSecond, self._currentBackoffTimeSecond * 2)
# Start the timer for resetting _currentBackoffTimeSecond
# Will be cancelled upon calling backOff
def startStableConnectionTimer(self):
self._resetBackoffTimer = threading.Timer(self._minimumConnectTimeSecond,
self._connectionStableThenResetBackoffTime)
self._resetBackoffTimer.start()
def stopStableConnectionTimer(self):
if self._resetBackoffTimer is not None:
# Cancel the timer
self._resetBackoffTimer.cancel()
# Timer callback to reset _currentBackoffTimeSecond
# If the connection is stable for longer than _minimumConnectTimeSecond,
# reset the currentBackoffTimeSecond to _baseReconnectTimeSecond
def _connectionStableThenResetBackoffTime(self):
self._logger.debug(
"stableConnection: Resetting the backoff time to: " + str(self._baseReconnectTimeSecond) + " sec.")
self._currentBackoffTimeSecond = self._baseReconnectTimeSecond
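# A minimal sketch of the intended calling pattern, assuming a hypothetical
# connect() helper: backOff() blocks for the current quiet period before each
# retry, and the stable-connection timer resets the backoff once a connection
# survives the minimum stable window.
if __name__ == "__main__":
    def connect():
        return False  # Placeholder for a real (re)connect attempt
    backoff = ProgressiveBackOffCore()
    for _ in range(3):
        backoff.backOff()  # Sleeps 1, 2, then 4 seconds across these attempts
        if connect():
            backoff.startStableConnectionTimer()
            break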
class SigV4Core:
_logger = logging.getLogger(__name__)
def __init__(self):
self._aws_access_key_id = ""
self._aws_secret_access_key = ""
self._aws_session_token = ""
self._credentialConfigFilePath = "~/.aws/credentials"
def setIAMCredentials(self, srcAWSAccessKeyID, srcAWSSecretAccessKey, srcAWSSessionToken):
self._aws_access_key_id = srcAWSAccessKeyID
self._aws_secret_access_key = srcAWSSecretAccessKey
self._aws_session_token = srcAWSSessionToken
def _createAmazonDate(self):
# Returned as a unicode string in Py3.x
amazonDate = []
currentTime = datetime.utcnow()
YMDHMS = currentTime.strftime('%Y%m%dT%H%M%SZ')
YMD = YMDHMS[0:YMDHMS.index('T')]
amazonDate.append(YMD)
amazonDate.append(YMDHMS)
return amazonDate
def _sign(self, key, message):
# Returned as a utf-8 byte string in Py3.x
return hmac.new(key, message.encode('utf-8'), hashlib.sha256).digest()
def _getSignatureKey(self, key, dateStamp, regionName, serviceName):
# Returned as a utf-8 byte string in Py3.x
kDate = self._sign(('AWS4' + key).encode('utf-8'), dateStamp)
kRegion = self._sign(kDate, regionName)
kService = self._sign(kRegion, serviceName)
kSigning = self._sign(kService, 'aws4_request')
return kSigning
def _checkIAMCredentials(self):
# Check custom config
ret = self._checkKeyInCustomConfig()
# Check environment variables
if not ret:
ret = self._checkKeyInEnv()
# Check files
if not ret:
ret = self._checkKeyInFiles()
# All credentials returned as unicode strings in Py3.x
return ret
def _checkKeyInEnv(self):
ret = dict()
self._aws_access_key_id = os.environ.get('AWS_ACCESS_KEY_ID')
self._aws_secret_access_key = os.environ.get('AWS_SECRET_ACCESS_KEY')
self._aws_session_token = os.environ.get('AWS_SESSION_TOKEN')
if self._aws_access_key_id is not None and self._aws_secret_access_key is not None:
ret["aws_access_key_id"] = self._aws_access_key_id
ret["aws_secret_access_key"] = self._aws_secret_access_key
# We do not necessarily need session token...
if self._aws_session_token is not None:
ret["aws_session_token"] = self._aws_session_token
self._logger.debug("IAM credentials from env var.")
return ret
def _checkKeyInINIDefault(self, srcConfigParser, sectionName):
ret = dict()
# Check aws_access_key_id and aws_secret_access_key
try:
ret["aws_access_key_id"] = srcConfigParser.get(sectionName, "aws_access_key_id")
ret["aws_secret_access_key"] = srcConfigParser.get(sectionName, "aws_secret_access_key")
except NoOptionError:
self._logger.warn("Cannot find IAM keyID/secretKey in credential file.")
# We do not continue searching if we cannot even get IAM id/secret right
if len(ret) == 2:
# Check aws_session_token, optional
try:
ret["aws_session_token"] = srcConfigParser.get(sectionName, "aws_session_token")
except NoOptionError:
self._logger.debug("No AWS Session Token found.")
return ret
def _checkKeyInFiles(self):
credentialFile = None
credentialConfig = None
ret = dict()
# Should be compatible with aws cli default credential configuration
# *NIX/Windows
try:
# See if we get the file
credentialConfig = ConfigParser()
            credentialFilePath = os.path.expanduser(self._credentialConfigFilePath)  # expanduser handles both *NIX and Windows home paths
credentialConfig.read(credentialFilePath)
# Now we have the file, start looking for credentials...
# 'default' section
ret = self._checkKeyInINIDefault(credentialConfig, "default")
if not ret:
# 'DEFAULT' section
ret = self._checkKeyInINIDefault(credentialConfig, "DEFAULT")
self._logger.debug("IAM credentials from file.")
except IOError:
self._logger.debug("No IAM credential configuration file in " + credentialFilePath)
except NoSectionError:
self._logger.error("Cannot find IAM 'default' section.")
return ret
def _checkKeyInCustomConfig(self):
ret = dict()
if self._aws_access_key_id != "" and self._aws_secret_access_key != "":
ret["aws_access_key_id"] = self._aws_access_key_id
ret["aws_secret_access_key"] = self._aws_secret_access_key
# We do not necessarily need session token...
if self._aws_session_token != "":
ret["aws_session_token"] = self._aws_session_token
self._logger.debug("IAM credentials from custom config.")
return ret
def createWebsocketEndpoint(self, host, port, region, method, awsServiceName, path):
# Return the endpoint as unicode string in 3.x
# Gather all the facts
amazonDate = self._createAmazonDate()
amazonDateSimple = amazonDate[0] # Unicode in 3.x
amazonDateComplex = amazonDate[1] # Unicode in 3.x
allKeys = self._checkIAMCredentials() # Unicode in 3.x
if not self._hasCredentialsNecessaryForWebsocket(allKeys):
raise wssNoKeyInEnvironmentError()
else:
# Because of self._hasCredentialsNecessaryForWebsocket(...), keyID and secretKey should not be None from here
keyID = allKeys["aws_access_key_id"]
secretKey = allKeys["aws_secret_access_key"]
# amazonDateSimple and amazonDateComplex are guaranteed not to be None
queryParameters = "X-Amz-Algorithm=AWS4-HMAC-SHA256" + \
"&X-Amz-Credential=" + keyID + "%2F" + amazonDateSimple + "%2F" + region + "%2F" + awsServiceName + "%2Faws4_request" + \
"&X-Amz-Date=" + amazonDateComplex + \
"&X-Amz-Expires=86400" + \
"&X-Amz-SignedHeaders=host" # Unicode in 3.x
hashedPayload = hashlib.sha256(str("").encode('utf-8')).hexdigest() # Unicode in 3.x
# Create the string to sign
signedHeaders = "host"
canonicalHeaders = "host:" + host + "\n"
canonicalRequest = method + "\n" + path + "\n" + queryParameters + "\n" + canonicalHeaders + "\n" + signedHeaders + "\n" + hashedPayload # Unicode in 3.x
            hashedCanonicalRequest = hashlib.sha256(str(canonicalRequest).encode('utf-8')).hexdigest()  # Unicode in 3.x
stringToSign = "AWS4-HMAC-SHA256\n" + amazonDateComplex + "\n" + amazonDateSimple + "/" + region + "/" + awsServiceName + "/aws4_request\n" + hashedCanonicalRequest # Unicode in 3.x
# Sign it
signingKey = self._getSignatureKey(secretKey, amazonDateSimple, region, awsServiceName)
signature = hmac.new(signingKey, (stringToSign).encode("utf-8"), hashlib.sha256).hexdigest()
# generate url
url = "wss://" + host + ":" + str(port) + path + '?' + queryParameters + "&X-Amz-Signature=" + signature
# See if we have STS token, if we do, add it
awsSessionTokenCandidate = allKeys.get("aws_session_token")
if awsSessionTokenCandidate is not None and len(awsSessionTokenCandidate) != 0:
aws_session_token = allKeys["aws_session_token"]
url += "&X-Amz-Security-Token=" + quote(aws_session_token.encode("utf-8")) # Unicode in 3.x
self._logger.debug("createWebsocketEndpoint: Websocket URL: " + url)
return url
def _hasCredentialsNecessaryForWebsocket(self, allKeys):
awsAccessKeyIdCandidate = allKeys.get("aws_access_key_id")
awsSecretAccessKeyCandidate = allKeys.get("aws_secret_access_key")
        # None values are NOT considered valid entries
        validEntries = awsAccessKeyIdCandidate is not None and awsSecretAccessKeyCandidate is not None
        if validEntries:
            # Empty values are NOT considered valid entries
            validEntries &= (len(awsAccessKeyIdCandidate) != 0 and len(awsSecretAccessKeyCandidate) != 0)
return validEntries
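# A minimal sketch, assuming obviously fake IAM credentials; the result is a
# presigned wss:// URL for the MQTT-over-websocket handshake.
if __name__ == "__main__":
    sigv4 = SigV4Core()
    sigv4.setIAMCredentials("AKIDEXAMPLE", "fakeSecretAccessKeyForIllustration", "")
    url = sigv4.createWebsocketEndpoint("prefix.iot.us-east-1.amazonaws.com",
                                        443, "us-east-1", "GET", "iotdata", "/mqtt")
    print(url)  # wss://prefix.iot.us-east-1.amazonaws.com:443/mqtt?X-Amz-Algorithm=AWS4-HMAC-SHA256&...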
# This is an internal class that buffers the incoming bytes into an
# internal buffer until it gets the full desired length of bytes.
# At that time, this bufferedReader will be reset.
# *Error handling:
# For retry errors (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, EAGAIN),
# leave them to the paho _packet_read for further handling (ignored and retried
# when data is available).
# For other errors, leave them to the paho _packet_read for error reporting.
class _BufferedReader:
_sslSocket = None
_internalBuffer = None
_remainedLength = -1
_bufferingInProgress = False
def __init__(self, sslSocket):
self._sslSocket = sslSocket
self._internalBuffer = bytearray()
self._bufferingInProgress = False
def _reset(self):
self._internalBuffer = bytearray()
self._remainedLength = -1
self._bufferingInProgress = False
def read(self, numberOfBytesToBeBuffered):
if not self._bufferingInProgress: # If last read is completed...
self._remainedLength = numberOfBytesToBeBuffered
self._bufferingInProgress = True # Now we start buffering a new length of bytes
        while self._remainedLength > 0:  # Read in a loop, always trying to read the remaining length
            # If the data is temporarily not available, socket.error will be raised and caught by paho
dataChunk = self._sslSocket.read(self._remainedLength)
# There is a chance where the server terminates the connection without closing the socket.
# If that happens, let's raise an exception and enter the reconnect flow.
if not dataChunk:
raise socket.error(errno.ECONNABORTED, 0)
self._internalBuffer.extend(dataChunk) # Buffer the data
            self._remainedLength -= len(dataChunk)  # Update the remaining length
# The requested length of bytes is buffered, recover the context and return it
# Otherwise error should be raised
ret = self._internalBuffer
self._reset()
return ret # This should always be bytearray
# This is the internal class that sends requested data out chunk by chunk according
# to the availability of the socket write operation. If the requested bytes of data
# (after encoding) need to be sent out in separate socket write operations (most
# probably interrupted by the error socket.error (errno = ssl.SSL_ERROR_WANT_WRITE)),
# the write pointer is stored to ensure that the continued bytes will be sent next
# time this function gets called.
# *Error handling:
# For retry errors (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, EAGAIN),
# leave them to the paho _packet_read for further handling (ignored and retried
# when data is available).
# For other errors, leave them to the paho _packet_read for error reporting.
class _BufferedWriter:
_sslSocket = None
_internalBuffer = None
_writingInProgress = False
_requestedDataLength = -1
def __init__(self, sslSocket):
self._sslSocket = sslSocket
self._internalBuffer = bytearray()
self._writingInProgress = False
self._requestedDataLength = -1
def _reset(self):
self._internalBuffer = bytearray()
self._writingInProgress = False
self._requestedDataLength = -1
# Input data for this function needs to be an encoded wss frame
# Always request for packet[pos=0:] (raw MQTT data)
def write(self, encodedData, payloadLength):
# encodedData should always be bytearray
# Check if we have a frame that is partially sent
if not self._writingInProgress:
self._internalBuffer = encodedData
self._writingInProgress = True
self._requestedDataLength = payloadLength
# Now, write as much as we can
lengthWritten = self._sslSocket.write(self._internalBuffer)
self._internalBuffer = self._internalBuffer[lengthWritten:]
# This MQTT packet has been sent out in a wss frame, completely
if len(self._internalBuffer) == 0:
ret = self._requestedDataLength
self._reset()
return ret
# This socket write is half-baked...
else:
return 0 # Ensure that the 'pos' inside the MQTT packet never moves since we have not finished the transmission of this encoded frame
class SecuredWebSocketCore:
# Websocket Constants
_OP_CONTINUATION = 0x0
_OP_TEXT = 0x1
_OP_BINARY = 0x2
_OP_CONNECTION_CLOSE = 0x8
_OP_PING = 0x9
_OP_PONG = 0xa
# Websocket Connect Status
_WebsocketConnectInit = -1
_WebsocketDisconnected = 1
_logger = logging.getLogger(__name__)
def __init__(self, socket, hostAddress, portNumber, AWSAccessKeyID="", AWSSecretAccessKey="", AWSSessionToken=""):
self._connectStatus = self._WebsocketConnectInit
# Handlers
self._sslSocket = socket
self._sigV4Handler = self._createSigV4Core()
self._sigV4Handler.setIAMCredentials(AWSAccessKeyID, AWSSecretAccessKey, AWSSessionToken)
# Endpoint Info
self._hostAddress = hostAddress
self._portNumber = portNumber
# Section Flags
self._hasOpByte = False
self._hasPayloadLengthFirst = False
self._hasPayloadLengthExtended = False
self._hasMaskKey = False
self._hasPayload = False
# Properties for current websocket frame
self._isFIN = False
self._RSVBits = None
self._opCode = None
self._needMaskKey = False
self._payloadLengthBytesLength = 1
self._payloadLength = 0
self._maskKey = None
self._payloadDataBuffer = bytearray() # Once the whole wss connection is lost, there is no need to keep the buffered payload
try:
self._handShake(hostAddress, portNumber)
except wssNoKeyInEnvironmentError: # Handle SigV4 signing and websocket handshaking errors
raise ValueError("No Access Key/KeyID Error")
except wssHandShakeError:
raise ValueError("Websocket Handshake Error")
except ClientError as e:
raise ValueError(e.message)
# Now we have a socket with secured websocket...
self._bufferedReader = _BufferedReader(self._sslSocket)
self._bufferedWriter = _BufferedWriter(self._sslSocket)
def _createSigV4Core(self):
return SigV4Core()
def _generateMaskKey(self):
return bytearray(os.urandom(4))
# os.urandom returns ascii str in 2.x, converted to bytearray
# os.urandom returns bytes in 3.x, converted to bytearray
def _reset(self): # Reset the context for wss frame reception
# Control info
self._hasOpByte = False
self._hasPayloadLengthFirst = False
self._hasPayloadLengthExtended = False
self._hasMaskKey = False
self._hasPayload = False
# Frame Info
self._isFIN = False
self._RSVBits = None
self._opCode = None
self._needMaskKey = False
self._payloadLengthBytesLength = 1
self._payloadLength = 0
self._maskKey = None
        # Never reset the payloadData since we might have fragmented MQTT data from the previous frame
def _generateWSSKey(self):
return base64.b64encode(os.urandom(128)) # Bytes
def _verifyWSSResponse(self, response, clientKey):
# Check if it is a 101 response
rawResponse = response.strip().lower()
if b"101 switching protocols" not in rawResponse or b"upgrade: websocket" not in rawResponse or b"connection: upgrade" not in rawResponse:
return False
# Parse out the sec-websocket-accept
WSSAcceptKeyIndex = response.strip().index(b"sec-websocket-accept: ") + len(b"sec-websocket-accept: ")
rawSecWebSocketAccept = response.strip()[WSSAcceptKeyIndex:].split(b"\r\n")[0].strip()
# Verify the WSSAcceptKey
return self._verifyWSSAcceptKey(rawSecWebSocketAccept, clientKey)
def _verifyWSSAcceptKey(self, srcAcceptKey, clientKey):
GUID = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
verifyServerAcceptKey = base64.b64encode((hashlib.sha1(clientKey + GUID)).digest()) # Bytes
return srcAcceptKey == verifyServerAcceptKey
def _handShake(self, hostAddress, portNumber):
CRLF = "\r\n"
IOT_ENDPOINT_PATTERN = r"^[0-9a-zA-Z]+(\.ats|-ats)?\.iot\.(.*)\.amazonaws\..*"
matched = re.compile(IOT_ENDPOINT_PATTERN, re.IGNORECASE).match(hostAddress)
if not matched:
raise ClientError("Invalid endpoint pattern for wss: %s" % hostAddress)
region = matched.group(2)
signedURL = self._sigV4Handler.createWebsocketEndpoint(hostAddress, portNumber, region, "GET", "iotdata", "/mqtt")
# Now we got a signedURL
path = signedURL[signedURL.index("/mqtt"):]
# Assemble HTTP request headers
Method = "GET " + path + " HTTP/1.1" + CRLF
Host = "Host: " + hostAddress + CRLF
Connection = "Connection: " + "Upgrade" + CRLF
Upgrade = "Upgrade: " + "websocket" + CRLF
secWebSocketVersion = "Sec-WebSocket-Version: " + "13" + CRLF
rawSecWebSocketKey = self._generateWSSKey() # Bytes
secWebSocketKey = "sec-websocket-key: " + rawSecWebSocketKey.decode('utf-8') + CRLF # Should be randomly generated...
secWebSocketProtocol = "Sec-WebSocket-Protocol: " + "mqttv3.1" + CRLF
secWebSocketExtensions = "Sec-WebSocket-Extensions: " + "permessage-deflate; client_max_window_bits" + CRLF
# Send the HTTP request
        # Ensure that we are sending bytes and not, by any chance, a unicode string
handshakeBytes = Method + Host + Connection + Upgrade + secWebSocketVersion + secWebSocketProtocol + secWebSocketExtensions + secWebSocketKey + CRLF
handshakeBytes = handshakeBytes.encode('utf-8')
self._sslSocket.write(handshakeBytes)
# Read it back (Non-blocking socket)
timeStart = time.time()
wssHandshakeResponse = bytearray()
while len(wssHandshakeResponse) == 0:
try:
wssHandshakeResponse += self._sslSocket.read(1024) # Response is always less than 1024 bytes
except socket.error as err:
if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE:
if time.time() - timeStart > self._getTimeoutSec():
raise err # We make sure that reconnect gets retried in Paho upon a wss reconnect response timeout
else:
raise err
# Verify response
# Now both wssHandshakeResponse and rawSecWebSocketKey are byte strings
if not self._verifyWSSResponse(wssHandshakeResponse, rawSecWebSocketKey):
raise wssHandShakeError()
else:
pass
def _getTimeoutSec(self):
return DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC
# Used to create a single wss frame
# Assume that the maximum length of a MQTT packet never exceeds the maximum length
# for a wss frame. Therefore, the FIN bit for the encoded frame will always be 1.
# Frames are encoded as BINARY frames.
def _encodeFrame(self, rawPayload, opCode, masked=1):
ret = bytearray()
# Op byte
opByte = 0x80 | opCode # Always a FIN, no RSV bits
ret.append(opByte)
# Payload Length bytes
maskBit = masked
payloadLength = len(rawPayload)
if payloadLength <= 125:
ret.append((maskBit << 7) | payloadLength)
elif payloadLength <= 0xffff: # 16-bit unsigned int
ret.append((maskBit << 7) | 126)
ret.extend(struct.pack("!H", payloadLength))
elif payloadLength <= 0x7fffffffffffffff: # 64-bit unsigned int (most significant bit must be 0)
ret.append((maskBit << 7) | 127)
ret.extend(struct.pack("!Q", payloadLength))
else: # Overflow
raise ValueError("Exceeds the maximum number of bytes for a single websocket frame.")
if maskBit == 1:
# Mask key bytes
maskKey = self._generateMaskKey()
ret.extend(maskKey)
# Mask the payload
payloadBytes = bytearray(rawPayload)
if maskBit == 1:
for i in range(0, payloadLength):
payloadBytes[i] ^= maskKey[i % 4]
ret.extend(payloadBytes)
# Return the assembled wss frame
return ret
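    # Worked example (hypothetical 3-byte payload b"abc", opCode=_OP_BINARY,
    # masked=1): the op byte is 0x80 | 0x2 == 0x82 (FIN set, BINARY frame);
    # the first length byte is (1 << 7) | 3 == 0x83 (mask bit plus length 3);
    # then come the 4 random mask key bytes, then the payload XOR-ed
    # byte-by-byte with maskKey[i % 4]. A 300-byte payload would instead use
    # (1 << 7) | 126 == 0xfe followed by struct.pack("!H", 300) == b"\x01\x2c".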
# Used for the wss client to close a wss connection
# Create and send a masked wss closing frame
def _closeWssConnection(self):
# Frames sent from client to server must be masked
self._sslSocket.write(self._encodeFrame(b"", self._OP_CONNECTION_CLOSE, masked=1))
# Used for the wss client to respond to a wss PING from server
# Create and send a masked PONG frame
def _sendPONG(self):
# Frames sent from client to server must be masked
self._sslSocket.write(self._encodeFrame(b"", self._OP_PONG, masked=1))
# Override sslSocket read. Always read from the wss internal payload buffer, which
# contains the masked MQTT packet. This read will decode ONE wss frame every time
# and load in the payload for MQTT _packet_read. At any time, MQTT _packet_read
# should be able to read a complete MQTT packet from the payload (buffered per wss
    # frame payload). If the MQTT packet is broken into separate wss frames, different
# chunks will be buffered in separate frames and MQTT _packet_read will not be able
# to collect a complete MQTT packet to operate on until the necessary payload is
# fully buffered.
# If the requested number of bytes are not available, SSL_ERROR_WANT_READ will be
# raised to trigger another call of _packet_read when the data is available again.
def read(self, numberOfBytes):
# Check if we have enough data for paho
        # _payloadDataBuffer will be non-empty only when the payload of a new wss frame
        # has been unmasked.
if len(self._payloadDataBuffer) >= numberOfBytes:
ret = self._payloadDataBuffer[0:numberOfBytes]
self._payloadDataBuffer = self._payloadDataBuffer[numberOfBytes:]
# struct.unpack(fmt, string) # Py2.x
# struct.unpack(fmt, buffer) # Py3.x
# Here ret is always in bytes (buffer interface)
if sys.version_info[0] < 3: # Py2.x
ret = str(ret)
return ret
# Emmm, We don't. Try to buffer from the socket (It's a new wss frame).
if not self._hasOpByte: # Check if we need to buffer OpByte
opByte = self._bufferedReader.read(1)
self._isFIN = (opByte[0] & 0x80) == 0x80
self._RSVBits = (opByte[0] & 0x70)
self._opCode = (opByte[0] & 0x0f)
self._hasOpByte = True # Finished buffering opByte
# Check if any of the RSV bits are set, if so, close the connection
# since client never sends negotiated extensions
if self._RSVBits != 0x0:
self._closeWssConnection()
self._connectStatus = self._WebsocketDisconnected
self._payloadDataBuffer = bytearray()
raise socket.error(ssl.SSL_ERROR_WANT_READ, "RSV bits set with NO negotiated extensions.")
if not self._hasPayloadLengthFirst: # Check if we need to buffer First Payload Length byte
payloadLengthFirst = self._bufferedReader.read(1)
self._hasPayloadLengthFirst = True # Finished buffering first byte of payload length
self._needMaskKey = (payloadLengthFirst[0] & 0x80) == 0x80
payloadLengthFirstByteArray = bytearray()
payloadLengthFirstByteArray.extend(payloadLengthFirst)
self._payloadLength = (payloadLengthFirstByteArray[0] & 0x7f)
if self._payloadLength == 126:
self._payloadLengthBytesLength = 2
self._hasPayloadLengthExtended = False # Force to buffer the extended
elif self._payloadLength == 127:
self._payloadLengthBytesLength = 8
self._hasPayloadLengthExtended = False # Force to buffer the extended
else: # _payloadLength <= 125:
self._hasPayloadLengthExtended = True # No need to buffer extended payload length
if not self._hasPayloadLengthExtended: # Check if we need to buffer Extended Payload Length bytes
payloadLengthExtended = self._bufferedReader.read(self._payloadLengthBytesLength)
self._hasPayloadLengthExtended = True
if sys.version_info[0] < 3:
payloadLengthExtended = str(payloadLengthExtended)
if self._payloadLengthBytesLength == 2:
self._payloadLength = struct.unpack("!H", payloadLengthExtended)[0]
else: # _payloadLengthBytesLength == 8
self._payloadLength = struct.unpack("!Q", payloadLengthExtended)[0]
if self._needMaskKey: # Response from server is masked, close the connection
self._closeWssConnection()
self._connectStatus = self._WebsocketDisconnected
self._payloadDataBuffer = bytearray()
raise socket.error(ssl.SSL_ERROR_WANT_READ, "Server response masked, closing connection and try again.")
if not self._hasPayload: # Check if we need to buffer the payload
payloadForThisFrame = self._bufferedReader.read(self._payloadLength)
self._hasPayload = True
            # Client side should never receive a masked packet from the server side
# Unmask it as needed
#if self._needMaskKey:
# for i in range(0, self._payloadLength):
# payloadForThisFrame[i] ^= self._maskKey[i % 4]
# Append it to the internal payload buffer
self._payloadDataBuffer.extend(payloadForThisFrame)
# Now we have the complete wss frame, reset the context
# Check to see if it is a wss closing frame
if self._opCode == self._OP_CONNECTION_CLOSE:
self._connectStatus = self._WebsocketDisconnected
self._payloadDataBuffer = bytearray() # Ensure that once the wss closing frame comes, we have nothing to read and start all over again
# Check to see if it is a wss PING frame
if self._opCode == self._OP_PING:
self._sendPONG() # Nothing more to do here, if the transmission of the last wssMQTT packet is not finished, it will continue
self._reset()
# Check again if we have enough data for paho
if len(self._payloadDataBuffer) >= numberOfBytes:
ret = self._payloadDataBuffer[0:numberOfBytes]
self._payloadDataBuffer = self._payloadDataBuffer[numberOfBytes:]
# struct.unpack(fmt, string) # Py2.x
# struct.unpack(fmt, buffer) # Py3.x
# Here ret is always in bytes (buffer interface)
if sys.version_info[0] < 3: # Py2.x
ret = str(ret)
return ret
else: # Fragmented MQTT packets in separate wss frames
raise socket.error(ssl.SSL_ERROR_WANT_READ, "Not a complete MQTT packet payload within this wss frame.")
def write(self, bytesToBeSent):
# When there is a disconnection, select will report a TypeError which triggers the reconnect.
# In reconnect, Paho will set the socket object (mocked by wss) to None, blocking other ops
# before a connection is re-established.
# This 'low-level' socket write op should always be able to write to plain socket.
# Error reporting is performed by Python socket itself.
# Wss closing frame handling is performed in the wss read.
return self._bufferedWriter.write(self._encodeFrame(bytesToBeSent, self._OP_BINARY, 1), len(bytesToBeSent))
def close(self):
if self._sslSocket is not None:
self._sslSocket.close()
self._sslSocket = None
def getpeercert(self):
return self._sslSocket.getpeercert()
def getSSLSocket(self):
if self._connectStatus != self._WebsocketDisconnected:
return self._sslSocket
else:
return None # Leave the sslSocket to Paho to close it. (_ssl.close() -> wssCore.close())

View File

@@ -0,0 +1,244 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import ssl
import logging
from threading import Lock
from numbers import Number
import AWSIoTPythonSDK.core.protocol.paho.client as mqtt
from AWSIoTPythonSDK.core.protocol.paho.client import MQTT_ERR_SUCCESS
from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids
class ClientStatus(object):
IDLE = 0
CONNECT = 1
RESUBSCRIBE = 2
DRAINING = 3
STABLE = 4
USER_DISCONNECT = 5
ABNORMAL_DISCONNECT = 6
class ClientStatusContainer(object):
def __init__(self):
self._status = ClientStatus.IDLE
def get_status(self):
return self._status
def set_status(self, status):
if ClientStatus.USER_DISCONNECT == self._status:  # Once the user requests a disconnect, ignore all status updates except a new connect
if ClientStatus.CONNECT == status:
self._status = status
else:
self._status = status
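# Illustrative behavior of the status gate above: once USER_DISCONNECT is set,
# every update except a new CONNECT request is ignored, e.g.:
#   container = ClientStatusContainer()
#   container.set_status(ClientStatus.USER_DISCONNECT)
#   container.set_status(ClientStatus.DRAINING)  # ignored
#   container.set_status(ClientStatus.CONNECT)   # accepted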
class InternalAsyncMqttClient(object):
_logger = logging.getLogger(__name__)
def __init__(self, client_id, clean_session, protocol, use_wss):
self._paho_client = self._create_paho_client(client_id, clean_session, None, protocol, use_wss)
self._use_wss = use_wss
self._event_callback_map_lock = Lock()
self._event_callback_map = dict()
def _create_paho_client(self, client_id, clean_session, user_data, protocol, use_wss):
self._logger.debug("Initializing MQTT layer...")
return mqtt.Client(client_id, clean_session, user_data, protocol, use_wss)
# TODO: Merge credentials providers configuration into one
def set_cert_credentials_provider(self, cert_credentials_provider):
# Historical issue from the Yun SDK, where the AR9331 embedded Linux only has
# Python 2.7.3 pre-installed. In that version, TLSv1_2 is not even an option.
# SSLv23 is a work-around which negotiates the highest TLS version supported by
# both the client and the service. If the user installs OpenSSL 1.0.1+, this
# option works fine for Mutual Auth.
# Note that we cannot force TLSv1.2 for Mutual Auth. in Python 2.7.3, since
# configurable TLS versions in Python are only available starting from Python 2.7.
# See also: https://docs.python.org/2/library/ssl.html#ssl.PROTOCOL_SSLv23
if self._use_wss:
ca_path = cert_credentials_provider.get_ca_path()
self._paho_client.tls_set(ca_certs=ca_path, cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_SSLv23)
else:
ca_path = cert_credentials_provider.get_ca_path()
cert_path = cert_credentials_provider.get_cert_path()
key_path = cert_credentials_provider.get_key_path()
self._paho_client.tls_set(ca_certs=ca_path, certfile=cert_path, keyfile=key_path,
cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_SSLv23)
def set_iam_credentials_provider(self, iam_credentials_provider):
self._paho_client.configIAMCredentials(iam_credentials_provider.get_access_key_id(),
iam_credentials_provider.get_secret_access_key(),
iam_credentials_provider.get_session_token())
def set_endpoint_provider(self, endpoint_provider):
self._endpoint_provider = endpoint_provider
def configure_last_will(self, topic, payload, qos, retain=False):
self._paho_client.will_set(topic, payload, qos, retain)
def configure_alpn_protocols(self, alpn_protocols):
self._paho_client.config_alpn_protocols(alpn_protocols)
def clear_last_will(self):
self._paho_client.will_clear()
def set_username_password(self, username, password=None):
self._paho_client.username_pw_set(username, password)
def set_socket_factory(self, socket_factory):
self._paho_client.socket_factory_set(socket_factory)
def configure_reconnect_back_off(self, base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec):
self._paho_client.setBackoffTiming(base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec)
def connect(self, keep_alive_sec, ack_callback=None):
host = self._endpoint_provider.get_host()
port = self._endpoint_provider.get_port()
with self._event_callback_map_lock:
self._logger.debug("Filling in fixed event callbacks: CONNACK, DISCONNECT, MESSAGE")
self._event_callback_map[FixedEventMids.CONNACK_MID] = self._create_combined_on_connect_callback(ack_callback)
self._event_callback_map[FixedEventMids.DISCONNECT_MID] = self._create_combined_on_disconnect_callback(None)
self._event_callback_map[FixedEventMids.MESSAGE_MID] = self._create_converted_on_message_callback()
rc = self._paho_client.connect(host, port, keep_alive_sec)
if MQTT_ERR_SUCCESS == rc:
self.start_background_network_io()
return rc
def start_background_network_io(self):
self._logger.debug("Starting network I/O thread...")
self._paho_client.loop_start()
def stop_background_network_io(self):
self._logger.debug("Stopping network I/O thread...")
self._paho_client.loop_stop()
def disconnect(self, ack_callback=None):
with self._event_callback_map_lock:
rc = self._paho_client.disconnect()
if MQTT_ERR_SUCCESS == rc:
self._logger.debug("Filling in custom disconnect event callback...")
combined_on_disconnect_callback = self._create_combined_on_disconnect_callback(ack_callback)
self._event_callback_map[FixedEventMids.DISCONNECT_MID] = combined_on_disconnect_callback
return rc
def _create_combined_on_connect_callback(self, ack_callback):
def combined_on_connect_callback(mid, data):
self.on_online()
if ack_callback:
ack_callback(mid, data)
return combined_on_connect_callback
def _create_combined_on_disconnect_callback(self, ack_callback):
def combined_on_disconnect_callback(mid, data):
self.on_offline()
if ack_callback:
ack_callback(mid, data)
return combined_on_disconnect_callback
def _create_converted_on_message_callback(self):
def converted_on_message_callback(mid, data):
self.on_message(data)
return converted_on_message_callback
# For client online notification
def on_online(self):
pass
# For client offline notification
def on_offline(self):
pass
# For client message reception notification
def on_message(self, message):
pass
def publish(self, topic, payload, qos, retain=False, ack_callback=None):
with self._event_callback_map_lock:
rc, mid = self._paho_client.publish(topic, payload, qos, retain)
if MQTT_ERR_SUCCESS == rc and qos > 0 and ack_callback:
self._logger.debug("Filling in custom puback (QoS>0) event callback...")
self._event_callback_map[mid] = ack_callback
return rc, mid
def subscribe(self, topic, qos, ack_callback=None):
with self._event_callback_map_lock:
rc, mid = self._paho_client.subscribe(topic, qos)
if MQTT_ERR_SUCCESS == rc and ack_callback:
self._logger.debug("Filling in custom suback event callback...")
self._event_callback_map[mid] = ack_callback
return rc, mid
def unsubscribe(self, topic, ack_callback=None):
with self._event_callback_map_lock:
rc, mid = self._paho_client.unsubscribe(topic)
if MQTT_ERR_SUCCESS == rc and ack_callback:
self._logger.debug("Filling in custom unsuback event callback...")
self._event_callback_map[mid] = ack_callback
return rc, mid
def register_internal_event_callbacks(self, on_connect, on_disconnect, on_publish, on_subscribe, on_unsubscribe, on_message):
self._logger.debug("Registering internal event callbacks to MQTT layer...")
self._paho_client.on_connect = on_connect
self._paho_client.on_disconnect = on_disconnect
self._paho_client.on_publish = on_publish
self._paho_client.on_subscribe = on_subscribe
self._paho_client.on_unsubscribe = on_unsubscribe
self._paho_client.on_message = on_message
def unregister_internal_event_callbacks(self):
self._logger.debug("Unregistering internal event callbacks from MQTT layer...")
self._paho_client.on_connect = None
self._paho_client.on_disconnect = None
self._paho_client.on_publish = None
self._paho_client.on_subscribe = None
self._paho_client.on_unsubscribe = None
self._paho_client.on_message = None
def invoke_event_callback(self, mid, data=None):
with self._event_callback_map_lock:
event_callback = self._event_callback_map.get(mid)
# For invoking the event callback, we do not need to acquire the lock
if event_callback:
self._logger.debug("Invoking custom event callback...")
if data is not None:
event_callback(mid=mid, data=data)
else:
event_callback(mid=mid)
if isinstance(mid, Number): # Do NOT remove callbacks for CONNACK/DISCONNECT/MESSAGE
self._logger.debug("This custom event callback is for pub/sub/unsub, removing it after invocation...")
with self._event_callback_map_lock:
del self._event_callback_map[mid]
def remove_event_callback(self, mid):
with self._event_callback_map_lock:
if mid in self._event_callback_map:
self._logger.debug("Removing custom event callback...")
del self._event_callback_map[mid]
def clean_up_event_callbacks(self):
with self._event_callback_map_lock:
self._event_callback_map.clear()
def get_event_callback_map(self):
return self._event_callback_map
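The event callback map keys custom ack callbacks by mid: pub/sub/unsub callbacks use the numeric mid returned by Paho and are removed after a single invocation, while the string mids from FixedEventMids stay registered for the life of the connection. A rough sketch of that round trip, assuming an already-connected InternalAsyncMqttClient instance named client:

def on_suback(mid, data):
    # For a suback, data carries the granted QoS
    print("subscribed, mid=%s granted=%s" % (mid, data))

rc, mid = client.subscribe("sdk/test", 1, ack_callback=on_suback)
# Later, when the consumer sees the SUBACK event for this mid:
client.invoke_event_callback(mid, data=(1,))
# Numeric mid, so the callback entry is removed after this invocation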

View File

@@ -0,0 +1,20 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC = 30
DEFAULT_OPERATION_TIMEOUT_SEC = 5
DEFAULT_DRAINING_INTERNAL_SEC = 0.5
METRICS_PREFIX = "?SDK=Python&Version="
ALPN_PROTCOLS = "x-amzn-mqtt-ca"
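# With metrics collection enabled, METRICS_PREFIX plus the SDK version is
# appended to the MQTT username (see MqttCore._load_username_password), e.g.:
#   "myUser" becomes "myUser?SDK=Python&Version=1.4.0"  (version illustrative)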

View File

@@ -0,0 +1,29 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class EventTypes(object):
CONNACK = 0
DISCONNECT = 1
PUBACK = 2
SUBACK = 3
UNSUBACK = 4
MESSAGE = 5
class FixedEventMids(object):
CONNACK_MID = "CONNECTED"
DISCONNECT_MID = "DISCONNECTED"
MESSAGE_MID = "MESSAGE"
QUEUED_MID = "QUEUED"

View File

@@ -0,0 +1,87 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import logging
from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes
class AppendResults(object):
APPEND_FAILURE_QUEUE_FULL = -1
APPEND_FAILURE_QUEUE_DISABLED = -2
APPEND_SUCCESS = 0
class OfflineRequestQueue(list):
_logger = logging.getLogger(__name__)
def __init__(self, max_size, drop_behavior=DropBehaviorTypes.DROP_NEWEST):
if not isinstance(max_size, int) or not isinstance(drop_behavior, int):
self._logger.error("init: MaximumSize/DropBehavior must be integer.")
raise TypeError("MaximumSize/DropBehavior must be integer.")
if drop_behavior != DropBehaviorTypes.DROP_OLDEST and drop_behavior != DropBehaviorTypes.DROP_NEWEST:
self._logger.error("init: Drop behavior not supported.")
raise ValueError("Drop behavior not supported.")
list.__init__(self)
self._drop_behavior = drop_behavior
# When self._max_size > 0, the queue is limited
# When self._max_size == 0, the queue is disabled
# When self._max_size < 0, the queue is infinite
self._max_size = max_size
def _is_enabled(self):
return self._max_size != 0
def _need_drop_messages(self):
# Need to drop messages when:
# 1. Queue is limited and full
# 2. Queue is disabled
is_queue_full = len(self) >= self._max_size
is_queue_limited = self._max_size > 0
is_queue_disabled = not self._is_enabled()
return (is_queue_full and is_queue_limited) or is_queue_disabled
def set_behavior_drop_newest(self):
self._drop_behavior = DropBehaviorTypes.DROP_NEWEST
def set_behavior_drop_oldest(self):
self._drop_behavior = DropBehaviorTypes.DROP_OLDEST
# Override
# Append to a queue with a limited size.
# Return APPEND_SUCCESS if the append is successful
# Return APPEND_FAILURE_QUEUE_FULL if the append failed because the queue is full
# Return APPEND_FAILURE_QUEUE_DISABLED if the append failed because the queue is disabled
def append(self, data):
ret = AppendResults.APPEND_SUCCESS
if self._is_enabled():
if self._need_drop_messages():
# We should drop the newest
if DropBehaviorTypes.DROP_NEWEST == self._drop_behavior:
self._logger.warn("append: Full queue. Drop the newest: " + str(data))
ret = AppendResults.APPEND_FAILURE_QUEUE_FULL
# We should drop the oldest
else:
current_oldest = super(OfflineRequestQueue, self).pop(0)
self._logger.warn("append: Full queue. Drop the oldest: " + str(current_oldest))
super(OfflineRequestQueue, self).append(data)
ret = AppendResults.APPEND_FAILURE_QUEUE_FULL
else:
self._logger.debug("append: Add new element: " + str(data))
super(OfflineRequestQueue, self).append(data)
else:
self._logger.debug("append: Queue is disabled. Drop the message: " + str(data))
ret = AppendResults.APPEND_FAILURE_QUEUE_DISABLED
return ret
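A short sketch of the two drop behaviors on a full queue (values illustrative):

from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes

queue = OfflineRequestQueue(2, DropBehaviorTypes.DROP_OLDEST)
queue.append("req1")
queue.append("req2")
queue.append("req3")  # Full: "req1" is dropped, queue keeps ["req2", "req3"],
                      # and APPEND_FAILURE_QUEUE_FULL is still returned
queue.set_behavior_drop_newest()
queue.append("req4")  # Full: "req4" itself is dropped
disabled_queue = OfflineRequestQueue(0, DropBehaviorTypes.DROP_NEWEST)
disabled_queue.append("req5")  # Returns APPEND_FAILURE_QUEUE_DISABLED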

View File

@@ -0,0 +1,27 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class RequestTypes(object):
CONNECT = 0
DISCONNECT = 1
PUBLISH = 2
SUBSCRIBE = 3
UNSUBSCRIBE = 4
class QueueableRequest(object):
def __init__(self, type, data):
self.type = type
self.data = data # Can be a tuple

View File

@@ -0,0 +1,296 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import time
import logging
from threading import Thread
from threading import Event
from AWSIoTPythonSDK.core.protocol.internal.events import EventTypes
from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids
from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatus
from AWSIoTPythonSDK.core.protocol.internal.queues import OfflineRequestQueue
from AWSIoTPythonSDK.core.protocol.internal.requests import RequestTypes
from AWSIoTPythonSDK.core.protocol.paho.client import topic_matches_sub
from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_DRAINING_INTERNAL_SEC
class EventProducer(object):
_logger = logging.getLogger(__name__)
def __init__(self, cv, event_queue):
self._cv = cv
self._event_queue = event_queue
def on_connect(self, client, user_data, flags, rc):
self._add_to_queue(FixedEventMids.CONNACK_MID, EventTypes.CONNACK, rc)
self._logger.debug("Produced [connack] event")
def on_disconnect(self, client, user_data, rc):
self._add_to_queue(FixedEventMids.DISCONNECT_MID, EventTypes.DISCONNECT, rc)
self._logger.debug("Produced [disconnect] event")
def on_publish(self, client, user_data, mid):
self._add_to_queue(mid, EventTypes.PUBACK, None)
self._logger.debug("Produced [puback] event")
def on_subscribe(self, client, user_data, mid, granted_qos):
self._add_to_queue(mid, EventTypes.SUBACK, granted_qos)
self._logger.debug("Produced [suback] event")
def on_unsubscribe(self, client, user_data, mid):
self._add_to_queue(mid, EventTypes.UNSUBACK, None)
self._logger.debug("Produced [unsuback] event")
def on_message(self, client, user_data, message):
self._add_to_queue(FixedEventMids.MESSAGE_MID, EventTypes.MESSAGE, message)
self._logger.debug("Produced [message] event")
def _add_to_queue(self, mid, event_type, data):
with self._cv:
self._event_queue.put((mid, event_type, data))
self._cv.notify()
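# Every Paho callback above funnels into _add_to_queue: the (mid, event_type,
# data) tuple is enqueued under the shared condition variable and the consumer
# below is notified, so user-facing callbacks never run on the Paho network
# thread.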
class EventConsumer(object):
MAX_DISPATCH_INTERNAL_SEC = 0.01
_logger = logging.getLogger(__name__)
def __init__(self, cv, event_queue, internal_async_client,
subscription_manager, offline_requests_manager, client_status):
self._cv = cv
self._event_queue = event_queue
self._internal_async_client = internal_async_client
self._subscription_manager = subscription_manager
self._offline_requests_manager = offline_requests_manager
self._client_status = client_status
self._is_running = False
self._draining_interval_sec = DEFAULT_DRAINING_INTERNAL_SEC
self._dispatch_methods = {
EventTypes.CONNACK : self._dispatch_connack,
EventTypes.DISCONNECT : self._dispatch_disconnect,
EventTypes.PUBACK : self._dispatch_puback,
EventTypes.SUBACK : self._dispatch_suback,
EventTypes.UNSUBACK : self._dispatch_unsuback,
EventTypes.MESSAGE : self._dispatch_message
}
self._offline_request_handlers = {
RequestTypes.PUBLISH : self._handle_offline_publish,
RequestTypes.SUBSCRIBE : self._handle_offline_subscribe,
RequestTypes.UNSUBSCRIBE : self._handle_offline_unsubscribe
}
self._stopper = Event()
def update_offline_requests_manager(self, offline_requests_manager):
self._offline_requests_manager = offline_requests_manager
def update_draining_interval_sec(self, draining_interval_sec):
self._draining_interval_sec = draining_interval_sec
def get_draining_interval_sec(self):
return self._draining_interval_sec
def is_running(self):
return self._is_running
def start(self):
self._stopper.clear()
self._is_running = True
dispatch_events = Thread(target=self._dispatch)
dispatch_events.daemon = True
dispatch_events.start()
self._logger.debug("Event consuming thread started")
def stop(self):
if self._is_running:
self._is_running = False
self._clean_up()
self._logger.debug("Event consuming thread stopped")
def _clean_up(self):
self._logger.debug("Cleaning up before stopping event consuming")
with self._event_queue.mutex:
self._event_queue.queue.clear()
self._logger.debug("Event queue cleared")
self._internal_async_client.stop_background_network_io()
self._logger.debug("Network thread stopped")
self._internal_async_client.clean_up_event_callbacks()
self._logger.debug("Event callbacks cleared")
def wait_until_it_stops(self, timeout_sec):
self._logger.debug("Waiting for event consumer to completely stop")
return self._stopper.wait(timeout=timeout_sec)
def is_fully_stopped(self):
return self._stopper.is_set()
def _dispatch(self):
while self._is_running:
with self._cv:
if self._event_queue.empty():
self._cv.wait(self.MAX_DISPATCH_INTERNAL_SEC)
else:
while not self._event_queue.empty():
self._dispatch_one()
self._stopper.set()
self._logger.debug("Exiting dispatching loop...")
def _dispatch_one(self):
mid, event_type, data = self._event_queue.get()
if mid:
self._dispatch_methods[event_type](mid, data)
self._internal_async_client.invoke_event_callback(mid, data=data)
# We need to make sure disconnect event gets dispatched and then we stop the consumer
if self._need_to_stop_dispatching(mid):
self.stop()
def _need_to_stop_dispatching(self, mid):
status = self._client_status.get_status()
return (ClientStatus.USER_DISCONNECT == status or ClientStatus.CONNECT == status) \
and mid == FixedEventMids.DISCONNECT_MID
def _dispatch_connack(self, mid, rc):
status = self._client_status.get_status()
self._logger.debug("Dispatching [connack] event")
if self._need_recover():
if ClientStatus.STABLE != status: # To avoid multiple connack dispatching
self._logger.debug("Has recovery job")
clean_up_debt = Thread(target=self._clean_up_debt)
clean_up_debt.start()
else:
self._logger.debug("No need for recovery")
self._client_status.set_status(ClientStatus.STABLE)
def _need_recover(self):
return self._subscription_manager.list_records() or self._offline_requests_manager.has_more()
def _clean_up_debt(self):
self._handle_resubscribe()
self._handle_draining()
self._client_status.set_status(ClientStatus.STABLE)
def _handle_resubscribe(self):
subscriptions = self._subscription_manager.list_records()
if subscriptions and not self._has_user_disconnect_request():
self._logger.debug("Start resubscribing")
self._client_status.set_status(ClientStatus.RESUBSCRIBE)
for topic, (qos, message_callback, ack_callback) in subscriptions:
if self._has_user_disconnect_request():
self._logger.debug("User disconnect detected")
break
self._internal_async_client.subscribe(topic, qos, ack_callback)
def _handle_draining(self):
if self._offline_requests_manager.has_more() and not self._has_user_disconnect_request():
self._logger.debug("Start draining")
self._client_status.set_status(ClientStatus.DRAINING)
while self._offline_requests_manager.has_more():
if self._has_user_disconnect_request():
self._logger.debug("User disconnect detected")
break
offline_request = self._offline_requests_manager.get_next()
if offline_request:
self._offline_request_handlers[offline_request.type](offline_request)
time.sleep(self._draining_interval_sec)
def _has_user_disconnect_request(self):
return ClientStatus.USER_DISCONNECT == self._client_status.get_status()
def _dispatch_disconnect(self, mid, rc):
self._logger.debug("Dispatching [disconnect] event")
status = self._client_status.get_status()
if ClientStatus.USER_DISCONNECT == status or ClientStatus.CONNECT == status:
pass
else:
self._client_status.set_status(ClientStatus.ABNORMAL_DISCONNECT)
# For puback, suback and unsuback, ack callback invocation is handled in dispatch_one
# Do nothing in the event dispatching itself
def _dispatch_puback(self, mid, rc):
self._logger.debug("Dispatching [puback] event")
def _dispatch_suback(self, mid, rc):
self._logger.debug("Dispatching [suback] event")
def _dispatch_unsuback(self, mid, rc):
self._logger.debug("Dispatching [unsuback] event")
def _dispatch_message(self, mid, message):
self._logger.debug("Dispatching [message] event")
subscriptions = self._subscription_manager.list_records()
if subscriptions:
for topic, (qos, message_callback, _) in subscriptions:
if topic_matches_sub(topic, message.topic) and message_callback:
message_callback(None, None, message) # message_callback(client, userdata, message)
def _handle_offline_publish(self, request):
topic, payload, qos, retain = request.data
self._internal_async_client.publish(topic, payload, qos, retain)
self._logger.debug("Processed offline publish request")
def _handle_offline_subscribe(self, request):
topic, qos, message_callback, ack_callback = request.data
self._subscription_manager.add_record(topic, qos, message_callback, ack_callback)
self._internal_async_client.subscribe(topic, qos, ack_callback)
self._logger.debug("Processed offline subscribe request")
def _handle_offline_unsubscribe(self, request):
topic, ack_callback = request.data
self._subscription_manager.remove_record(topic)
self._internal_async_client.unsubscribe(topic, ack_callback)
self._logger.debug("Processed offline unsubscribe request")
class SubscriptionManager(object):
_logger = logging.getLogger(__name__)
def __init__(self):
self._subscription_map = dict()
def add_record(self, topic, qos, message_callback, ack_callback):
self._logger.debug("Adding a new subscription record: %s qos: %d", topic, qos)
self._subscription_map[topic] = qos, message_callback, ack_callback # message_callback and/or ack_callback could be None
def remove_record(self, topic):
self._logger.debug("Removing subscription record: %s", topic)
if self._subscription_map.get(topic): # Ignore topics that are never subscribed to
del self._subscription_map[topic]
else:
self._logger.warn("Removing attempt for non-exist subscription record: %s", topic)
def list_records(self):
return list(self._subscription_map.items())
class OfflineRequestsManager(object):
_logger = logging.getLogger(__name__)
def __init__(self, max_size, drop_behavior):
self._queue = OfflineRequestQueue(max_size, drop_behavior)
def has_more(self):
return len(self._queue) > 0
def add_one(self, request):
return self._queue.append(request)
def get_next(self):
if self.has_more():
return self._queue.pop(0)
else:
return None
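Message dispatch above fans one inbound message out to every subscription record whose filter matches, via Paho's topic_matches_sub. For instance (illustrative):

from AWSIoTPythonSDK.core.protocol.paho.client import topic_matches_sub

print(topic_matches_sub("sdk/+/status", "sdk/device1/status"))  # True
print(topic_matches_sub("sdk/#", "sdk/device1/status"))         # True
print(topic_matches_sub("sdk/+/status", "sdk/device1/config"))  # False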

View File

@@ -0,0 +1,373 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import AWSIoTPythonSDK
from AWSIoTPythonSDK.core.protocol.internal.clients import InternalAsyncMqttClient
from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatusContainer
from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatus
from AWSIoTPythonSDK.core.protocol.internal.workers import EventProducer
from AWSIoTPythonSDK.core.protocol.internal.workers import EventConsumer
from AWSIoTPythonSDK.core.protocol.internal.workers import SubscriptionManager
from AWSIoTPythonSDK.core.protocol.internal.workers import OfflineRequestsManager
from AWSIoTPythonSDK.core.protocol.internal.requests import RequestTypes
from AWSIoTPythonSDK.core.protocol.internal.requests import QueueableRequest
from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC
from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_OPERATION_TIMEOUT_SEC
from AWSIoTPythonSDK.core.protocol.internal.defaults import METRICS_PREFIX
from AWSIoTPythonSDK.core.protocol.internal.defaults import ALPN_PROTCOLS
from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids
from AWSIoTPythonSDK.core.protocol.paho.client import MQTT_ERR_SUCCESS
from AWSIoTPythonSDK.exception.AWSIoTExceptions import connectError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import connectTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import disconnectError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import disconnectTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueFullException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueDisabledException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeQueueFullException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeQueueDisabledException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeQueueFullException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeQueueDisabledException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeTimeoutException
from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeError
from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeTimeoutException
from AWSIoTPythonSDK.core.protocol.internal.queues import AppendResults
from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes
from AWSIoTPythonSDK.core.protocol.paho.client import MQTTv31
from threading import Condition
from threading import Event
import logging
import sys
if sys.version_info[0] < 3:
from Queue import Queue
else:
from queue import Queue
class MqttCore(object):
_logger = logging.getLogger(__name__)
def __init__(self, client_id, clean_session, protocol, use_wss):
self._use_wss = use_wss
self._username = ""
self._password = None
self._enable_metrics_collection = True
self._event_queue = Queue()
self._event_cv = Condition()
self._event_producer = EventProducer(self._event_cv, self._event_queue)
self._client_status = ClientStatusContainer()
self._internal_async_client = InternalAsyncMqttClient(client_id, clean_session, protocol, use_wss)
self._subscription_manager = SubscriptionManager()
self._offline_requests_manager = OfflineRequestsManager(-1, DropBehaviorTypes.DROP_NEWEST) # Infinite queue
self._event_consumer = EventConsumer(self._event_cv,
self._event_queue,
self._internal_async_client,
self._subscription_manager,
self._offline_requests_manager,
self._client_status)
self._connect_disconnect_timeout_sec = DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC
self._operation_timeout_sec = DEFAULT_OPERATION_TIMEOUT_SEC
self._init_offline_request_exceptions()
self._init_workers()
self._logger.info("MqttCore initialized")
self._logger.info("Client id: %s" % client_id)
self._logger.info("Protocol version: %s" % ("MQTTv3.1" if protocol == MQTTv31 else "MQTTv3.1.1"))
self._logger.info("Authentication type: %s" % ("SigV4 WebSocket" if use_wss else "TLSv1.2 certificate based Mutual Auth."))
def _init_offline_request_exceptions(self):
self._offline_request_queue_disabled_exceptions = {
RequestTypes.PUBLISH : publishQueueDisabledException(),
RequestTypes.SUBSCRIBE : subscribeQueueDisabledException(),
RequestTypes.UNSUBSCRIBE : unsubscribeQueueDisabledException()
}
self._offline_request_queue_full_exceptions = {
RequestTypes.PUBLISH : publishQueueFullException(),
RequestTypes.SUBSCRIBE : subscribeQueueFullException(),
RequestTypes.UNSUBSCRIBE : unsubscribeQueueFullException()
}
def _init_workers(self):
self._internal_async_client.register_internal_event_callbacks(self._event_producer.on_connect,
self._event_producer.on_disconnect,
self._event_producer.on_publish,
self._event_producer.on_subscribe,
self._event_producer.on_unsubscribe,
self._event_producer.on_message)
def _start_workers(self):
self._event_consumer.start()
def use_wss(self):
return self._use_wss
# Used for general message event reception
def on_message(self, message):
pass
# Used for general online event notification
def on_online(self):
pass
# Used for general offline event notification
def on_offline(self):
pass
def configure_cert_credentials(self, cert_credentials_provider):
self._logger.info("Configuring certificates...")
self._internal_async_client.set_cert_credentials_provider(cert_credentials_provider)
def configure_iam_credentials(self, iam_credentials_provider):
self._logger.info("Configuring custom IAM credentials...")
self._internal_async_client.set_iam_credentials_provider(iam_credentials_provider)
def configure_endpoint(self, endpoint_provider):
self._logger.info("Configuring endpoint...")
self._internal_async_client.set_endpoint_provider(endpoint_provider)
def configure_connect_disconnect_timeout_sec(self, connect_disconnect_timeout_sec):
self._logger.info("Configuring connect/disconnect time out: %f sec" % connect_disconnect_timeout_sec)
self._connect_disconnect_timeout_sec = connect_disconnect_timeout_sec
def configure_operation_timeout_sec(self, operation_timeout_sec):
self._logger.info("Configuring MQTT operation time out: %f sec" % operation_timeout_sec)
self._operation_timeout_sec = operation_timeout_sec
def configure_reconnect_back_off(self, base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec):
self._logger.info("Configuring reconnect back off timing...")
self._logger.info("Base quiet time: %f sec" % base_reconnect_quiet_sec)
self._logger.info("Max quiet time: %f sec" % max_reconnect_quiet_sec)
self._logger.info("Stable connection time: %f sec" % stable_connection_sec)
self._internal_async_client.configure_reconnect_back_off(base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec)
def configure_alpn_protocols(self):
self._logger.info("Configuring alpn protocols...")
self._internal_async_client.configure_alpn_protocols([ALPN_PROTCOLS])
def configure_last_will(self, topic, payload, qos, retain=False):
self._logger.info("Configuring last will...")
self._internal_async_client.configure_last_will(topic, payload, qos, retain)
def clear_last_will(self):
self._logger.info("Clearing last will...")
self._internal_async_client.clear_last_will()
def configure_username_password(self, username, password=None):
self._logger.info("Configuring username and password...")
self._username = username
self._password = password
def configure_socket_factory(self, socket_factory):
self._logger.info("Configuring socket factory...")
self._internal_async_client.set_socket_factory(socket_factory)
def enable_metrics_collection(self):
self._enable_metrics_collection = True
def disable_metrics_collection(self):
self._enable_metrics_collection = False
def configure_offline_requests_queue(self, max_size, drop_behavior):
self._logger.info("Configuring offline requests queueing: max queue size: %d", max_size)
self._offline_requests_manager = OfflineRequestsManager(max_size, drop_behavior)
self._event_consumer.update_offline_requests_manager(self._offline_requests_manager)
def configure_draining_interval_sec(self, draining_interval_sec):
self._logger.info("Configuring offline requests queue draining interval: %f sec", draining_interval_sec)
self._event_consumer.update_draining_interval_sec(draining_interval_sec)
def connect(self, keep_alive_sec):
self._logger.info("Performing sync connect...")
event = Event()
self.connect_async(keep_alive_sec, self._create_blocking_ack_callback(event))
if not event.wait(self._connect_disconnect_timeout_sec):
self._logger.error("Connect timed out")
raise connectTimeoutException()
return True
def connect_async(self, keep_alive_sec, ack_callback=None):
self._logger.info("Performing async connect...")
self._logger.info("Keep-alive: %f sec" % keep_alive_sec)
self._start_workers()
self._load_callbacks()
self._load_username_password()
try:
self._client_status.set_status(ClientStatus.CONNECT)
rc = self._internal_async_client.connect(keep_alive_sec, ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Connect error: %d", rc)
raise connectError(rc)
except Exception as e:
# If any error occurs during connect, clean up the threads that have been created
self._event_consumer.stop()
if not self._event_consumer.wait_until_it_stops(self._connect_disconnect_timeout_sec):
self._logger.error("Time out in waiting for event consumer to stop")
else:
self._logger.debug("Event consumer stopped")
self._client_status.set_status(ClientStatus.IDLE)
raise e
return FixedEventMids.CONNACK_MID
def _load_callbacks(self):
self._logger.debug("Passing in general notification callbacks to internal client...")
self._internal_async_client.on_online = self.on_online
self._internal_async_client.on_offline = self.on_offline
self._internal_async_client.on_message = self.on_message
def _load_username_password(self):
username_candidate = self._username
if self._enable_metrics_collection:
username_candidate += METRICS_PREFIX
username_candidate += AWSIoTPythonSDK.__version__
self._internal_async_client.set_username_password(username_candidate, self._password)
def disconnect(self):
self._logger.info("Performing sync disconnect...")
event = Event()
self.disconnect_async(self._create_blocking_ack_callback(event))
if not event.wait(self._connect_disconnect_timeout_sec):
self._logger.error("Disconnect timed out")
raise disconnectTimeoutException()
if not self._event_consumer.wait_until_it_stops(self._connect_disconnect_timeout_sec):
self._logger.error("Disconnect timed out in waiting for event consumer")
raise disconnectTimeoutException()
return True
def disconnect_async(self, ack_callback=None):
self._logger.info("Performing async disconnect...")
self._client_status.set_status(ClientStatus.USER_DISCONNECT)
rc = self._internal_async_client.disconnect(ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Disconnect error: %d", rc)
raise disconnectError(rc)
return FixedEventMids.DISCONNECT_MID
def publish(self, topic, payload, qos, retain=False):
self._logger.info("Performing sync publish...")
ret = False
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.PUBLISH, (topic, payload, qos, retain))
else:
if qos > 0:
event = Event()
rc, mid = self._publish_async(topic, payload, qos, retain, self._create_blocking_ack_callback(event))
if not event.wait(self._operation_timeout_sec):
self._internal_async_client.remove_event_callback(mid)
self._logger.error("Publish timed out")
raise publishTimeoutException()
else:
self._publish_async(topic, payload, qos, retain)
ret = True
return ret
def publish_async(self, topic, payload, qos, retain=False, ack_callback=None):
self._logger.info("Performing async publish...")
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.PUBLISH, (topic, payload, qos, retain))
return FixedEventMids.QUEUED_MID
else:
rc, mid = self._publish_async(topic, payload, qos, retain, ack_callback)
return mid
def _publish_async(self, topic, payload, qos, retain=False, ack_callback=None):
rc, mid = self._internal_async_client.publish(topic, payload, qos, retain, ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Publish error: %d", rc)
raise publishError(rc)
return rc, mid
def subscribe(self, topic, qos, message_callback=None):
self._logger.info("Performing sync subscribe...")
ret = False
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.SUBSCRIBE, (topic, qos, message_callback, None))
else:
event = Event()
rc, mid = self._subscribe_async(topic, qos, self._create_blocking_ack_callback(event), message_callback)
if not event.wait(self._operation_timeout_sec):
self._internal_async_client.remove_event_callback(mid)
self._logger.error("Subscribe timed out")
raise subscribeTimeoutException()
ret = True
return ret
def subscribe_async(self, topic, qos, ack_callback=None, message_callback=None):
self._logger.info("Performing async subscribe...")
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.SUBSCRIBE, (topic, qos, message_callback, ack_callback))
return FixedEventMids.QUEUED_MID
else:
rc, mid = self._subscribe_async(topic, qos, ack_callback, message_callback)
return mid
def _subscribe_async(self, topic, qos, ack_callback=None, message_callback=None):
self._subscription_manager.add_record(topic, qos, message_callback, ack_callback)
rc, mid = self._internal_async_client.subscribe(topic, qos, ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Subscribe error: %d", rc)
raise subscribeError(rc)
return rc, mid
def unsubscribe(self, topic):
self._logger.info("Performing sync unsubscribe...")
ret = False
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.UNSUBSCRIBE, (topic, None))
else:
event = Event()
rc, mid = self._unsubscribe_async(topic, self._create_blocking_ack_callback(event))
if not event.wait(self._operation_timeout_sec):
self._internal_async_client.remove_event_callback(mid)
self._logger.error("Unsubscribe timed out")
raise unsubscribeTimeoutException()
ret = True
return ret
def unsubscribe_async(self, topic, ack_callback=None):
self._logger.info("Performing async unsubscribe...")
if ClientStatus.STABLE != self._client_status.get_status():
self._handle_offline_request(RequestTypes.UNSUBSCRIBE, (topic, ack_callback))
return FixedEventMids.QUEUED_MID
else:
rc, mid = self._unsubscribe_async(topic, ack_callback)
return mid
def _unsubscribe_async(self, topic, ack_callback=None):
self._subscription_manager.remove_record(topic)
rc, mid = self._internal_async_client.unsubscribe(topic, ack_callback)
if MQTT_ERR_SUCCESS != rc:
self._logger.error("Unsubscribe error: %d", rc)
raise unsubscribeError(rc)
return rc, mid
def _create_blocking_ack_callback(self, event):
def ack_callback(mid, data=None):
event.set()
return ack_callback
def _handle_offline_request(self, type, data):
self._logger.info("Offline request detected!")
offline_request = QueueableRequest(type, data)
append_result = self._offline_requests_manager.add_one(offline_request)
if AppendResults.APPEND_FAILURE_QUEUE_DISABLED == append_result:
self._logger.error("Offline request queue has been disabled")
raise self._offline_request_queue_disabled_exceptions[type]
if AppendResults.APPEND_FAILURE_QUEUE_FULL == append_result:
self._logger.error("Offline request queue is full")
raise self._offline_request_queue_full_exceptions[type]
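Putting the pieces together, a typical synchronous session against MqttCore looks roughly like the sketch below; the provider objects, message callback, and import paths are assumptions for illustration, not definitions from this file:

from AWSIoTPythonSDK.core.protocol.mqtt_core import MqttCore
from AWSIoTPythonSDK.core.protocol.paho.client import MQTTv311

core = MqttCore("myClientId", True, MQTTv311, False)  # cert-based Mutual Auth
core.configure_endpoint(my_endpoint_provider)         # assumed provider objects
core.configure_cert_credentials(my_cert_provider)
core.connect(keep_alive_sec=30)        # blocks until CONNACK or raises on timeout
core.subscribe("sdk/test", 1, message_callback=my_message_callback)
core.publish("sdk/test", "hello", 1)   # queued offline when the client is not STABLE
core.disconnect()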

View File

@@ -0,0 +1,430 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import json
import logging
import uuid
from threading import Timer, Lock, Thread
class _shadowRequestToken:
URN_PREFIX_LENGTH = 9
def getNextToken(self):
return uuid.uuid4().urn[self.URN_PREFIX_LENGTH:] # We only need the uuid digits, not the urn prefix
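# uuid.uuid4().urn returns e.g. "urn:uuid:0ae0fc0d-...": slicing off the first
# 9 characters ("urn:uuid:") leaves just the UUID digits used as the token.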
class _basicJSONParser:
def setString(self, srcString):
self._rawString = srcString
self._dictionaryObject = None
def regenerateString(self):
return json.dumps(self._dictionaryObject)
def getAttributeValue(self, srcAttributeKey):
return self._dictionaryObject.get(srcAttributeKey)
def setAttributeValue(self, srcAttributeKey, srcAttributeValue):
self._dictionaryObject[srcAttributeKey] = srcAttributeValue
def validateJSON(self):
try:
self._dictionaryObject = json.loads(self._rawString)
except ValueError:
return False
return True
class deviceShadow:
_logger = logging.getLogger(__name__)
def __init__(self, srcShadowName, srcIsPersistentSubscribe, srcShadowManager):
"""
The class that denotes a local/client-side device shadow instance.
Users can perform shadow operations on this instance to retrieve and modify the
corresponding shadow JSON document in AWS IoT Cloud. The following shadow operations
are available:
- Get
- Update
- Delete
- Listen on delta
- Cancel listening on delta
This is returned from :code:`AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTShadowClient.createShadowWithName` function call.
No need to call directly from user scripts.
"""
if srcShadowName is None or srcIsPersistentSubscribe is None or srcShadowManager is None:
raise TypeError("None type inputs detected.")
self._shadowName = srcShadowName
# Tool handler
self._shadowManagerHandler = srcShadowManager
self._basicJSONParserHandler = _basicJSONParser()
self._tokenHandler = _shadowRequestToken()
# Properties
self._isPersistentSubscribe = srcIsPersistentSubscribe
self._lastVersionInSync = -1 # -1 means not initialized
self._isGetSubscribed = False
self._isUpdateSubscribed = False
self._isDeleteSubscribed = False
self._shadowSubscribeCallbackTable = dict()
self._shadowSubscribeCallbackTable["get"] = None
self._shadowSubscribeCallbackTable["delete"] = None
self._shadowSubscribeCallbackTable["update"] = None
self._shadowSubscribeCallbackTable["delta"] = None
self._shadowSubscribeStatusTable = dict()
self._shadowSubscribeStatusTable["get"] = 0
self._shadowSubscribeStatusTable["delete"] = 0
self._shadowSubscribeStatusTable["update"] = 0
self._tokenPool = dict()
self._dataStructureLock = Lock()
def _doNonPersistentUnsubscribe(self, currentAction):
self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, currentAction)
self._logger.info("Unsubscribed to " + currentAction + " accepted/rejected topics for deviceShadow: " + self._shadowName)
def generalCallback(self, client, userdata, message):
# In Py3.x, message.payload comes in as bytes;
# json.loads needs a string input, hence the UTF-8 decode below
with self._dataStructureLock:
currentTopic = message.topic
currentAction = self._parseTopicAction(currentTopic) # get/delete/update/delta
currentType = self._parseTopicType(currentTopic) # accepted/rejected/delta
payloadUTF8String = message.payload.decode('utf-8')
# get/delete/update: Need to deal with token, timer and unsubscribe
if currentAction in ["get", "delete", "update"]:
# Check for token
self._basicJSONParserHandler.setString(payloadUTF8String)
if self._basicJSONParserHandler.validateJSON(): # Filter out invalid JSON
currentToken = self._basicJSONParserHandler.getAttributeValue(u"clientToken")
if currentToken is not None:
self._logger.debug("shadow message clientToken: " + currentToken)
if currentToken is not None and currentToken in self._tokenPool.keys(): # Filter out JSON without the desired token
# Sync local version when it is an accepted response
self._logger.debug("Token is in the pool. Type: " + currentType)
if currentType == "accepted":
incomingVersion = self._basicJSONParserHandler.getAttributeValue(u"version")
# If it is get/update accepted response, we need to sync the local version
if incomingVersion is not None and incomingVersion > self._lastVersionInSync and currentAction != "delete":
self._lastVersionInSync = incomingVersion
# If it is a delete accepted, we need to reset the version
else:
self._lastVersionInSync = -1 # The version will be re-synced on the next incoming delta or get/update accepted response
# Cancel the timer and clear the token
self._tokenPool[currentToken].cancel()
del self._tokenPool[currentToken]
# Need to unsubscribe?
self._shadowSubscribeStatusTable[currentAction] -= 1
if not self._isPersistentSubscribe and self._shadowSubscribeStatusTable.get(currentAction) <= 0:
self._shadowSubscribeStatusTable[currentAction] = 0
processNonPersistentUnsubscribe = Thread(target=self._doNonPersistentUnsubscribe, args=[currentAction])
processNonPersistentUnsubscribe.start()
# Custom callback
if self._shadowSubscribeCallbackTable.get(currentAction) is not None:
processCustomCallback = Thread(target=self._shadowSubscribeCallbackTable[currentAction], args=[payloadUTF8String, currentType, currentToken])
processCustomCallback.start()
# delta: Watch for version
else:
currentType += "/" + self._parseTopicShadowName(currentTopic)
# Sync local version
self._basicJSONParserHandler.setString(payloadUTF8String)
if self._basicJSONParserHandler.validateJSON(): # Filter out JSON without version
incomingVersion = self._basicJSONParserHandler.getAttributeValue(u"version")
if incomingVersion is not None and incomingVersion > self._lastVersionInSync:
self._lastVersionInSync = incomingVersion
# Custom callback
if self._shadowSubscribeCallbackTable.get(currentAction) is not None:
processCustomCallback = Thread(target=self._shadowSubscribeCallbackTable[currentAction], args=[payloadUTF8String, currentType, None])
processCustomCallback.start()
def _parseTopicAction(self, srcTopic):
ret = None
fragments = srcTopic.split('/')
if fragments[5] == "delta":
ret = "delta"
else:
ret = fragments[4]
return ret
def _parseTopicType(self, srcTopic):
fragments = srcTopic.split('/')
return fragments[5]
def _parseTopicShadowName(self, srcTopic):
fragments = srcTopic.split('/')
return fragments[2]
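# Shadow topics have the form "$aws/things/<shadowName>/shadow/<action>[/<type>]".
# E.g. "$aws/things/Bot/shadow/update/delta".split('/') yields fragments with
# fragments[2] == "Bot", fragments[4] == "update" and fragments[5] == "delta".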
def _timerHandler(self, srcActionName, srcToken):
with self._dataStructureLock:
# Don't crash if we try to remove an unknown token
if srcToken not in self._tokenPool:
self._logger.warn('Tried to remove non-existent token from pool: %s' % str(srcToken))
return
# Remove the token
del self._tokenPool[srcToken]
# Need to unsubscribe?
self._shadowSubscribeStatusTable[srcActionName] -= 1
if not self._isPersistentSubscribe and self._shadowSubscribeStatusTable.get(srcActionName) <= 0:
self._shadowSubscribeStatusTable[srcActionName] = 0
self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, srcActionName)
# Notify time-out issue
if self._shadowSubscribeCallbackTable.get(srcActionName) is not None:
self._logger.info("Shadow request with token: " + str(srcToken) + " has timed out.")
self._shadowSubscribeCallbackTable[srcActionName]("REQUEST TIME OUT", "timeout", srcToken)
def shadowGet(self, srcCallback, srcTimeout):
"""
**Description**
Retrieve the device shadow JSON document from AWS IoT by publishing an empty JSON document to the
corresponding shadow topics. Shadow response topics will be subscribed to receive responses from
AWS IoT regarding the result of the get operation. Retrieved shadow JSON document will be available
in the registered callback. If no response is received within the provided timeout, a timeout
notification will be passed into the registered callback.
**Syntax**
.. code:: python
# Retrieve the shadow JSON document from AWS IoT, with a timeout set to 5 seconds
BotShadow.shadowGet(customCallback, 5)
**Parameters**
*srcCallback* - Function to be called when the response for this shadow request comes back. Should
be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
rejected or is a delta message, :code:`token` is the token used for tracing in this request.
*srcTimeout* - Timeout in seconds for this request. If no response arrives within this period,
a timeout notification will be generated and passed into the registered callback.
**Returns**
The token used for tracing in this shadow request.
"""
with self._dataStructureLock:
# Update callback data structure
self._shadowSubscribeCallbackTable["get"] = srcCallback
# Update number of pending feedback
self._shadowSubscribeStatusTable["get"] += 1
# clientToken
currentToken = self._tokenHandler.getNextToken()
self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["get", currentToken])
self._basicJSONParserHandler.setString("{}")
self._basicJSONParserHandler.validateJSON()
self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken)
currentPayload = self._basicJSONParserHandler.regenerateString()
# Two subscriptions
if not self._isPersistentSubscribe or not self._isGetSubscribed:
self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "get", self.generalCallback)
self._isGetSubscribed = True
self._logger.info("Subscribed to get accepted/rejected topics for deviceShadow: " + self._shadowName)
# One publish
self._shadowManagerHandler.basicShadowPublish(self._shadowName, "get", currentPayload)
# Start the timer
self._tokenPool[currentToken].start()
return currentToken
def shadowDelete(self, srcCallback, srcTimeout):
"""
**Description**
Delete the device shadow from AWS IoT by publishing an empty JSON document to the corresponding
shadow topics. Shadow response topics will be subscribed to receive responses from AWS IoT
regarding the result of the delete operation. Responses will be available in the registered callback.
If no response is received within the provided timeout, a timeout notification will be passed into
the registered callback.
**Syntax**
.. code:: python
# Delete the device shadow from AWS IoT, with a timeout set to 5 seconds
BotShadow.shadowDelete(customCallback, 5)
**Parameters**
*srcCallback* - Function to be called when the response for this shadow request comes back. Should
be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
rejected or is a delta message, :code:`token` is the token used for tracing in this request.
*srcTimeout* - Timeout in seconds for this request. If no response arrives within this period,
a timeout notification will be generated and passed into the registered callback.
**Returns**
The token used for tracing in this shadow request.
"""
with self._dataStructureLock:
# Update callback data structure
self._shadowSubscribeCallbackTable["delete"] = srcCallback
# Update number of pending feedback
self._shadowSubscribeStatusTable["delete"] += 1
# clientToken
currentToken = self._tokenHandler.getNextToken()
self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["delete", currentToken])
self._basicJSONParserHandler.setString("{}")
self._basicJSONParserHandler.validateJSON()
self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken)
currentPayload = self._basicJSONParserHandler.regenerateString()
# Two subscriptions
if not self._isPersistentSubscribe or not self._isDeleteSubscribed:
self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "delete", self.generalCallback)
self._isDeleteSubscribed = True
self._logger.info("Subscribed to delete accepted/rejected topics for deviceShadow: " + self._shadowName)
# One publish
self._shadowManagerHandler.basicShadowPublish(self._shadowName, "delete", currentPayload)
# Start the timer
self._tokenPool[currentToken].start()
return currentToken
def shadowUpdate(self, srcJSONPayload, srcCallback, srcTimeout):
"""
**Description**
Update the device shadow JSON document in AWS IoT by publishing the provided JSON
document to the corresponding shadow topics. Shadow response topics will be subscribed to
receive responses from AWS IoT regarding the result of the update operation. Responses will be
available in the registered callback. If no response is received within the provided timeout,
a timeout notification will be passed into the registered callback.
**Syntax**
.. code:: python
# Update the shadow JSON document from AWS IoT, with a timeout set to 5 seconds
BotShadow.shadowUpdate(newShadowJSONDocumentString, customCallback, 5)
**Parameters**
*srcJSONPayload* - JSON document string used to update shadow JSON document in AWS IoT.
*srcCallback* - Function to be called when the response for this shadow request comes back. Should
be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
rejected or is a delta message, :code:`token` is the token used for tracing in this request.
*srcTimeout* - Timeout in seconds for this request. If no response arrives within this period,
a timeout notification will be generated and passed into the registered callback.
**Returns**
The token used for tracing in this shadow request.
"""
# Validate JSON
self._basicJSONParserHandler.setString(srcJSONPayload)
if self._basicJSONParserHandler.validateJSON():
with self._dataStructureLock:
# clientToken
currentToken = self._tokenHandler.getNextToken()
self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["update", currentToken])
self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken)
JSONPayloadWithToken = self._basicJSONParserHandler.regenerateString()
# Update callback data structure
self._shadowSubscribeCallbackTable["update"] = srcCallback
# Update number of pending feedback
self._shadowSubscribeStatusTable["update"] += 1
# Two subscriptions
if not self._isPersistentSubscribe or not self._isUpdateSubscribed:
self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "update", self.generalCallback)
self._isUpdateSubscribed = True
self._logger.info("Subscribed to update accepted/rejected topics for deviceShadow: " + self._shadowName)
# One publish
self._shadowManagerHandler.basicShadowPublish(self._shadowName, "update", JSONPayloadWithToken)
# Start the timer
self._tokenPool[currentToken].start()
else:
raise ValueError("Invalid JSON file.")
return currentToken
def shadowRegisterDeltaCallback(self, srcCallback):
"""
**Description**
Listen on delta topics for this device shadow by subscribing to delta topics. Whenever there
is a difference between the desired and reported state, the registered callback will be called
and the delta payload will be available in the callback.
**Syntax**
.. code:: python
# Listen on delta topics for BotShadow
BotShadow.shadowRegisterDeltaCallback(customCallback)
**Parameters**
*srcCallback* - Function to be called when the response for this shadow request comes back. Should
be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
rejected or is a delta message, :code:`token` is the token used for tracing in this request.
**Returns**
None
"""
with self._dataStructureLock:
# Update callback data structure
self._shadowSubscribeCallbackTable["delta"] = srcCallback
# One subscription
self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "delta", self.generalCallback)
self._logger.info("Subscribed to delta topic for deviceShadow: " + self._shadowName)
def shadowUnregisterDeltaCallback(self):
"""
**Description**
        Cancel listening on delta topics for this device shadow by unsubscribing from the delta topic. No
        delta messages will be received after this API call, even when there is a difference between the
        desired and reported states.
**Syntax**
.. code:: python
# Cancel listening on delta topics for BotShadow
BotShadow.shadowUnregisterDeltaCallback()
**Parameters**
None
**Returns**
None
"""
with self._dataStructureLock:
# Update callback data structure
del self._shadowSubscribeCallbackTable["delta"]
# One unsubscription
self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, "delta")
self._logger.info("Unsubscribed to delta topics for deviceShadow: " + self._shadowName)


@@ -0,0 +1,83 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import logging
import time
from threading import Lock
class _shadowAction:
_actionType = ["get", "update", "delete", "delta"]
def __init__(self, srcShadowName, srcActionName):
if srcActionName is None or srcActionName not in self._actionType:
raise TypeError("Unsupported shadow action.")
self._shadowName = srcShadowName
self._actionName = srcActionName
self.isDelta = srcActionName == "delta"
if self.isDelta:
self._topicDelta = "$aws/things/" + str(self._shadowName) + "/shadow/update/delta"
else:
self._topicGeneral = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName)
self._topicAccept = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + "/accepted"
self._topicReject = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + "/rejected"
def getTopicGeneral(self):
return self._topicGeneral
def getTopicAccept(self):
return self._topicAccept
def getTopicReject(self):
return self._topicReject
def getTopicDelta(self):
return self._topicDelta
class shadowManager:
_logger = logging.getLogger(__name__)
def __init__(self, srcMQTTCore):
# Load in mqttCore
if srcMQTTCore is None:
raise TypeError("None type inputs detected.")
self._mqttCoreHandler = srcMQTTCore
self._shadowSubUnsubOperationLock = Lock()
def basicShadowPublish(self, srcShadowName, srcShadowAction, srcPayload):
currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
self._mqttCoreHandler.publish(currentShadowAction.getTopicGeneral(), srcPayload, 0, False)
def basicShadowSubscribe(self, srcShadowName, srcShadowAction, srcCallback):
with self._shadowSubUnsubOperationLock:
currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
if currentShadowAction.isDelta:
self._mqttCoreHandler.subscribe(currentShadowAction.getTopicDelta(), 0, srcCallback)
else:
self._mqttCoreHandler.subscribe(currentShadowAction.getTopicAccept(), 0, srcCallback)
self._mqttCoreHandler.subscribe(currentShadowAction.getTopicReject(), 0, srcCallback)
time.sleep(2)
def basicShadowUnsubscribe(self, srcShadowName, srcShadowAction):
with self._shadowSubUnsubOperationLock:
currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
if currentShadowAction.isDelta:
self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicDelta())
else:
self._logger.debug(currentShadowAction.getTopicAccept())
self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicAccept())
self._logger.debug(currentShadowAction.getTopicReject())
self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicReject())
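To make the topic construction above concrete, this is what _shadowAction yields for a shadow named "Bot" (an illustrative sketch only; the class is private to this module):

update = _shadowAction("Bot", "update")
print(update.getTopicGeneral())  # $aws/things/Bot/shadow/update
print(update.getTopicAccept())   # $aws/things/Bot/shadow/update/accepted
print(update.getTopicReject())   # $aws/things/Bot/shadow/update/rejected
print(_shadowAction("Bot", "delta").getTopicDelta())  # $aws/things/Bot/shadow/update/delta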


@@ -0,0 +1,19 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class DropBehaviorTypes(object):
DROP_OLDEST = 0
DROP_NEWEST = 1
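These constants select which message the client drops when its offline publish queue fills up. A short sketch of how they are typically used, assuming this module is importable as AWSIoTPythonSDK.core.util.enums and that client is a configured AWSIoTMQTTClient:

from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes

# Queue up to 10 messages while offline; drop the oldest entry when the queue is full
client.configureOfflinePublishQueueing(10, DropBehaviorTypes.DROP_OLDEST)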


@@ -0,0 +1,92 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class CredentialsProvider(object):
def __init__(self):
self._ca_path = ""
def set_ca_path(self, ca_path):
self._ca_path = ca_path
def get_ca_path(self):
return self._ca_path
class CertificateCredentialsProvider(CredentialsProvider):
def __init__(self):
CredentialsProvider.__init__(self)
self._cert_path = ""
self._key_path = ""
def set_cert_path(self,cert_path):
self._cert_path = cert_path
def set_key_path(self, key_path):
self._key_path = key_path
def get_cert_path(self):
return self._cert_path
def get_key_path(self):
return self._key_path
class IAMCredentialsProvider(CredentialsProvider):
def __init__(self):
CredentialsProvider.__init__(self)
self._aws_access_key_id = ""
self._aws_secret_access_key = ""
self._aws_session_token = ""
def set_access_key_id(self, access_key_id):
self._aws_access_key_id = access_key_id
def set_secret_access_key(self, secret_access_key):
self._aws_secret_access_key = secret_access_key
def set_session_token(self, session_token):
self._aws_session_token = session_token
def get_access_key_id(self):
return self._aws_access_key_id
def get_secret_access_key(self):
return self._aws_secret_access_key
def get_session_token(self):
return self._aws_session_token
class EndpointProvider(object):
def __init__(self):
self._host = ""
self._port = -1
def set_host(self, host):
self._host = host
def set_port(self, port):
self._port = port
def get_host(self):
return self._host
def get_port(self):
return self._port
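A minimal sketch of wiring these providers together; the endpoint is a placeholder, and the file names simply mirror the certificate files committed later in this repo:

credentials = CertificateCredentialsProvider()
credentials.set_ca_path("./rootCA.pem")
credentials.set_cert_path("./device1Cert.pem")
credentials.set_key_path("./device1Cert.key")

endpoint = EndpointProvider()
endpoint.set_host("YOUR_ENDPOINT.iot.us-east-1.amazonaws.com")
endpoint.set_port(8883)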


@@ -0,0 +1,153 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
import AWSIoTPythonSDK.exception.operationTimeoutException as operationTimeoutException
import AWSIoTPythonSDK.exception.operationError as operationError
# Serial Exception
class acceptTimeoutException(Exception):
def __init__(self, msg="Accept Timeout"):
self.message = msg
# MQTT Operation Timeout Exception
class connectTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Connect Timeout"):
self.message = msg
class disconnectTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Disconnect Timeout"):
self.message = msg
class publishTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Publish Timeout"):
self.message = msg
class subscribeTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Subscribe Timeout"):
self.message = msg
class unsubscribeTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, msg="Unsubscribe Timeout"):
self.message = msg
# MQTT Operation Error
class connectError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Connect Error: " + str(errorCode)
class disconnectError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Disconnect Error: " + str(errorCode)
class publishError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Publish Error: " + str(errorCode)
class publishQueueFullException(operationError.operationError):
def __init__(self):
self.message = "Internal Publish Queue Full"
class publishQueueDisabledException(operationError.operationError):
def __init__(self):
self.message = "Offline publish request dropped because queueing is disabled"
class subscribeError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Subscribe Error: " + str(errorCode)
class subscribeQueueFullException(operationError.operationError):
def __init__(self):
self.message = "Internal Subscribe Queue Full"
class subscribeQueueDisabledException(operationError.operationError):
def __init__(self):
self.message = "Offline subscribe request dropped because queueing is disabled"
class unsubscribeError(operationError.operationError):
def __init__(self, errorCode):
self.message = "Unsubscribe Error: " + str(errorCode)
class unsubscribeQueueFullException(operationError.operationError):
def __init__(self):
self.message = "Internal Unsubscribe Queue Full"
class unsubscribeQueueDisabledException(operationError.operationError):
def __init__(self):
self.message = "Offline unsubscribe request dropped because queueing is disabled"
# Websocket Error
class wssNoKeyInEnvironmentError(operationError.operationError):
def __init__(self):
self.message = "No AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY detected in $ENV."
class wssHandShakeError(operationError.operationError):
def __init__(self):
self.message = "Error in WSS handshake."
# Greengrass Discovery Error
class DiscoveryDataNotFoundException(operationError.operationError):
def __init__(self):
self.message = "No discovery data found"
class DiscoveryTimeoutException(operationTimeoutException.operationTimeoutException):
def __init__(self, message="Discovery request timed out"):
self.message = message
class DiscoveryInvalidRequestException(operationError.operationError):
def __init__(self):
self.message = "Invalid discovery request"
class DiscoveryUnauthorizedException(operationError.operationError):
def __init__(self):
self.message = "Discovery request not authorized"
class DiscoveryThrottlingException(operationError.operationError):
def __init__(self):
self.message = "Too many discovery requests"
class DiscoveryFailure(operationError.operationError):
def __init__(self, message):
self.message = message
# Client Error
class ClientError(Exception):
def __init__(self, message):
self.message = message
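A hedged sketch of handling these exceptions around a blocking QoS 1 publish; it assumes client is a connected AWSIoTMQTTClient and that this module is importable as AWSIoTPythonSDK.exception.AWSIoTExceptions:

from AWSIoTPythonSDK.exception.AWSIoTExceptions import (
    publishTimeoutException,
    publishQueueFullException,
    publishQueueDisabledException,
)

def safe_publish(client, topic, payload):
    try:
        client.publish(topic, payload, 1)  # QoS 1, blocks until acked or timed out
    except publishTimeoutException:
        print("Publish timed out")
    except (publishQueueFullException, publishQueueDisabledException):
        print("Offline publish request dropped")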


@@ -0,0 +1,19 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class operationError(Exception):
def __init__(self, msg="Operation Error"):
self.message = msg


@@ -0,0 +1,19 @@
# /*
# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# *
# * Licensed under the Apache License, Version 2.0 (the "License").
# * You may not use this file except in compliance with the License.
# * A copy of the License is located at
# *
# * http://aws.amazon.com/apache2.0
# *
# * or in the "license" file accompanying this file. This file is distributed
# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# * express or implied. See the License for the specific language governing
# * permissions and limitations under the License.
# */
class operationTimeoutException(Exception):
def __init__(self, msg="Operation Timeout"):
self.message = msg


@@ -0,0 +1,2 @@
[metadata]
description-file = README.rst


@@ -0,0 +1,34 @@
import sys
sys.path.insert(0, 'AWSIoTPythonSDK')
import AWSIoTPythonSDK
currentVersion = AWSIoTPythonSDK.__version__
from distutils.core import setup
setup(
name = 'AWSIoTPythonSDK',
packages=['AWSIoTPythonSDK', 'AWSIoTPythonSDK.core',
'AWSIoTPythonSDK.core.util', 'AWSIoTPythonSDK.core.shadow', 'AWSIoTPythonSDK.core.protocol',
'AWSIoTPythonSDK.core.jobs',
'AWSIoTPythonSDK.core.protocol.paho', 'AWSIoTPythonSDK.core.protocol.internal',
'AWSIoTPythonSDK.core.protocol.connection', 'AWSIoTPythonSDK.core.greengrass',
'AWSIoTPythonSDK.core.greengrass.discovery', 'AWSIoTPythonSDK.exception'],
version = currentVersion,
description = 'SDK for connecting to AWS IoT using Python.',
    author = 'Amazon Web Services',
author_email = '',
url = 'https://github.com/aws/aws-iot-device-sdk-python.git',
download_url = 'https://s3.amazonaws.com/aws-iot-device-sdk-python/aws-iot-device-sdk-python-latest.zip',
keywords = ['aws', 'iot', 'mqtt'],
classifiers = [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5"
]
)

119
data_points.py Normal file

@@ -0,0 +1,119 @@
from datetime import datetime
import time
import minimalmodbus
from pycomm.ab_comm.clx import Driver as clx
from pycomm.cip.cip_base import CommError, DataError
class DataPoint(object):
def __init__(self,changeThreshold=0,guaranteed=3600, name="datapoint",alertThreshold=[],alertCondition=[],alertResponse=[],alertContact=[]):
self.value = None
self.lastvalue = None
self.lastsend = 0
self.changeThreshold = changeThreshold
self.guaranteed = guaranteed
self.name = name
self.alerted = False
self.alertThreshold = alertThreshold
self.alertCondition = alertCondition
self.alertResponse = alertResponse
self.alertContact = alertContact
def checkSend(self,value):
if value != self.lastvalue or (time.time() - self.lastsend > self.guaranteed):
self.lastsend = time.time()
self.lastvalue = value
return True
else:
return False
def checkAlert(self,value):
conditions = {
"gt": "value > threshold",
"lt": "value < threshold",
"eq": "value == threshold",
"gte": "value >= threshold",
"lte": "value <= threshold",
"not": "value != threshold"
}
for thres,cond in zip(self.alertThreshold,self.alertCondition):
#check value for alert threshold
evalVars = {
"value": value,
"threshold": thres
}
func = conditions.get(cond)
            if func is None:
print("Not an available function: {}".format(cond))
else:
if eval(func, evalVars):
return {"message":"Read value for {} is {} threshold value {}".format(self.name,value,thres)}
else:
self.alerted = False
return None
class modbusDataPoint(DataPoint):
def __init__(self,changeThreshold,guaranteed,name,register=1,baud=19200,stopBits=1,parity=None, device='/dev/ttyS0'):
DataPoint.__init__(self,changeThreshold,guaranteed,name)
self.register = register
self.baud = baud
self.stopBits = stopBits
self.parity = parity
self.device = device
def read(self):
pass
def write(self):
pass
class plcDataPoint(DataPoint):
def __init__(self,changeThreshold,guaranteed,name,plcIP='192.168.1.10',plcType='Micro800',tag=None,alertThreshold=[],alertCondition=[],alertResponse=[],alertContact=[]):
DataPoint.__init__(self,changeThreshold,guaranteed,name,alertThreshold,alertCondition,alertResponse,alertContact)
self.plcIP = plcIP
self.plcType = plcType
self.tag = tag
def read(self):
direct_connect = self.plcType == "Micro800"
c = clx()
try:
if c.open(self.plcIP,direct_connect):
try:
val = c.read_tag(self.tag)
c.close()
alertMessage = self.checkAlert(val[0])
return val[0], alertMessage
except DataError as derr:
print("Error: {}".format(derr))
c.close()
        except CommError as cerr:
            print("Error: {}".format(cerr))
        # Return a consistent (value, alertMessage) pair on failure so that
        # callers unpacking the result do not raise on communication errors
        return None, None
def write(self):
pass
class currentDataPoint(DataPoint):
def __init__(self,changeThreshold,guaranteed,name, euMin=0, euMax=100, rawMin=4, rawMax=20):
DataPoint.__init__(self,changeThreshold,guaranteed,name)
self.euMin = euMin
self.euMax = euMax
self.rawMin = rawMin
self.rawMax = rawMax
def read(self):
pass
class voltageDataPoint(DataPoint):
def __init__(self,changeThreshold,guaranteed,name, euMin=0, euMax=100, rawMin=0, rawMax=10):
DataPoint.__init__(self,changeThreshold,guaranteed,name)
self.euMin = euMin
self.euMax = euMax
self.rawMin = rawMin
self.rawMax = rawMax
def read(self):
pass
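One caveat on checkAlert() above: it assembles comparison expressions as strings and runs them through eval(). A hypothetical, roughly equivalent alternative uses the operator module so that no string is ever executed (a sketch only, not part of this commit):

import operator

CONDITIONS = {
    "gt": operator.gt,
    "lt": operator.lt,
    "eq": operator.eq,
    "gte": operator.ge,
    "lte": operator.le,
    "not": operator.ne,
}

def check_alert(point, value):
    # point is assumed to be a DataPoint instance from this module
    for thres, cond in zip(point.alertThreshold, point.alertCondition):
        func = CONDITIONS.get(cond)
        if func is None:
            print("Not an available function: {}".format(cond))
        elif func(value, thres):
            return {"message": "Read value for {} is {} threshold value {}".format(point.name, value, thres)}
    point.alerted = False
    return None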

BIN
data_points.pyc Normal file

Binary file not shown.

27
device1Cert.key Normal file

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAtle0G78fyQE4l1IoYnSp7iaUzmFpYc1tJkP2KHxmHxXwkvqI
S+gRap582ngcBccXaFnG48+ooagmcv5DQaaRSYrdKP6XFNgd86jwPEDWHWFKg7/C
JVQpMauzQd8DUIX7hcMuS0jD2BDfAxVIoZCOrT1ow5fbRb3tKKltN4szLERl6QLJ
89OjT8P+ZpuW02Zpw0pjMKSLeCdjjBMsa8ELuRQGwzTS/+cfXlN21zemDzUv/Udp
VNH+tYbRGO/kxfm1k9WVOZiidARjG/bTWMlJl71Li6K0mMqEv3Qta3XArhu2/GC0
E7JLtPnKNQonieEmlXyNR26kHEcUAgOfxy/J+QIDAQABAoIBAHhCOZg/IiB4fLFY
Tyg4F0bpDSVcG5uUV5NwKS4kdVm1J5hYQYIGiU4PPvr7UkgBOY/p/gGLmIUdmFYp
GYR37cRaIGiEGHJ34rErz119yXlRDEr+MnZaHl0TB8O+6Lm3094xjxu53uhmoB6x
b9iWtXLOWIT/Z2+ExqAVteF3HgXn7LE4B/bzZ/9571M8+DRcMMxUhh5+aFxldwY4
AJa9JgIiBnRoRUO0U9c6tkIG8M6Xq5uFGMnd1CZMEz9QCKAbzxcH8eVy2R/k/hc/
N+g1Zx8TxzpKYmaFPk+vZnt9AVcKxadjXiDSFPV4xZ5fpnoIO9mpw6he1sqv5AVB
Ni8hcDECgYEA6CIF7lgTSWCUGPCDF1ZrQykrOdCD86nthuzWU4wRmBE1KQK+RSt0
gNM38yDOtHE3L04iaSC9dqmGbvae62W3AijOA95G0eY6nJP4ia/e/sfbkI4BXOAX
5k5m0ZV9HMNAMpthVtrf7ZkFPF7+suYp8Eoc2qo1hPY2+PnjPmplKc0CgYEAyRcl
7mI6Kk9ciZYTsSeVjPTQszLEga9nqoFhfhRGdFFC/LmwZ1gGSFuag30wWMbf0Chi
rDyLzduccfacSdkPEKAuKThe7iI63XHsWMQrgwi5I84+k4rDR9nhjAezrrbfZfhu
S2xEBWB6OX0yFbeVFfTqXBlzScuiymwEwoSBhN0CgYEAlWjAtIYP89yrtdmoJq9C
3rlyzwV8yKqI7Z0m3iN7d4sr0jeny9GKbRiGHIDzSoTMZjA+Sbf++o9mraki5JRV
VJh68VZx8svi0cET6Vs/hnGQytv72JGMEHpKB3/WRVsOyQPlhQfftYgWLKNgADnQ
qI6rP7rqM6hd/aapMxU8A8kCgYB/Dqo/2j7INwbQRExC9jDvNEydvWkeS/cja8Zv
BF6T5jh+ONG2Ko8lrwONK0+d+GK4Qpw+Ga94LdfGxjxwCL8VETC5iM2qh2RMQUxF
tgWMMLnSXuF5FgdXYdq6QK+OqCu1YWhHLaw4/YGcy3cW8702d16RPN90dD9yyRek
1FaF3QKBgEDic6rSZOCMxV2CNpPgPSR0KcK01vycyj0V433g0PSoZ+qwbD2qMeZL
w5A2qWaAmzVSVsKrFWhbEN9tFIPPOU6oyEtEW8KdP+lGcf1ks9Y65gGfHzU5sEfb
FYareLdzs2GTluMTGnk4uS1cjT2sQDitLjrOw9YqWa4BmSvdhcW3
-----END RSA PRIVATE KEY-----

23
device1Cert.pem Normal file

@@ -0,0 +1,23 @@
-----BEGIN CERTIFICATE-----
MIID0jCCAroCFFjR75nvGyoFpSn0YFt3YZ0ejZ7GMA0GCSqGSIb3DQEBCwUAMIGQ
MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB01pZGxhbmQx
EzARBgNVBAoMCkhlbnJ5IFB1bXAxEzARBgNVBAsMCkF1dG9tYXRpb24xDjAMBgNV
BAMMBUhQSW9UMSUwIwYJKoZIhvcNAQkBFhZub3JlcGx5QGhlbnJ5LXB1bXAuY29t
MB4XDTIwMDEyMDIwMjQwOFoXDTIxMDExOTIwMjQwOFowgbkxCzAJBgNVBAYTAlVT
MQ4wDAYDVQQHDAVUZXhhczETMBEGA1UECgwKSGVucnkgUHVtcDETMBEGA1UECwwK
QXV0b21hdGlvbjFJMEcGA1UEAwxAZjUyYzliZWQwOTk3YzhmOTJiNDFiYzA4NWMy
MGIwZWFhNDdmYmZhOGY3OGJiODYzMTAwODdhMjRhODcyMTQwMTElMCMGCSqGSIb3
DQEJARYWbm9yZXBseUBoZW5yeS1wdW1wLmNvbTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBALZXtBu/H8kBOJdSKGJ0qe4mlM5haWHNbSZD9ih8Zh8V8JL6
iEvoEWqefNp4HAXHF2hZxuPPqKGoJnL+Q0GmkUmK3Sj+lxTYHfOo8DxA1h1hSoO/
wiVUKTGrs0HfA1CF+4XDLktIw9gQ3wMVSKGQjq09aMOX20W97SipbTeLMyxEZekC
yfPTo0/D/mabltNmacNKYzCki3gnY4wTLGvBC7kUBsM00v/nH15Tdtc3pg81L/1H
aVTR/rWG0Rjv5MX5tZPVlTmYonQEYxv201jJSZe9S4uitJjKhL90LWt1wK4btvxg
tBOyS7T5yjUKJ4nhJpV8jUdupBxHFAIDn8cvyfkCAwEAATANBgkqhkiG9w0BAQsF
AAOCAQEATPlVtR0/I+fy5iSmLKoBexQPC4utffCyppW+onoLCAetpKpCpsyYtb74
FkefqCIyjcpjuKJJNnKVHGUr7hr3L3hDzybTxNu8LUpfioNPlbNjdowi29W3I1MX
2miDwylAL4F5X/hQkmJ8jxdLFdI2obcGqo7vzvryY25BRhT9H5VOcDYNlC/gvaN1
exsv8bIyo1BdwVzcW0ucDRjXbbUNBkMM6J7LLh4X3ZvAxe62CQfrw3pUmeml+bi1
IGSmA0QgJwtH+LVbqHlQfOhQFHrBr8SfrbyqDyqeRG13eaiwjqAczR902IHG1pev
ZOAqwqO3Vaf6yYh80iX3hFDKZ5QN+A==
-----END CERTIFICATE-----

47
device1CertAndCACert.pem Normal file

@@ -0,0 +1,47 @@
-----BEGIN CERTIFICATE-----
MIID0jCCAroCFFjR75nvGyoFpSn0YFt3YZ0ejZ7GMA0GCSqGSIb3DQEBCwUAMIGQ
MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB01pZGxhbmQx
EzARBgNVBAoMCkhlbnJ5IFB1bXAxEzARBgNVBAsMCkF1dG9tYXRpb24xDjAMBgNV
BAMMBUhQSW9UMSUwIwYJKoZIhvcNAQkBFhZub3JlcGx5QGhlbnJ5LXB1bXAuY29t
MB4XDTIwMDEyMDIwMjQwOFoXDTIxMDExOTIwMjQwOFowgbkxCzAJBgNVBAYTAlVT
MQ4wDAYDVQQHDAVUZXhhczETMBEGA1UECgwKSGVucnkgUHVtcDETMBEGA1UECwwK
QXV0b21hdGlvbjFJMEcGA1UEAwxAZjUyYzliZWQwOTk3YzhmOTJiNDFiYzA4NWMy
MGIwZWFhNDdmYmZhOGY3OGJiODYzMTAwODdhMjRhODcyMTQwMTElMCMGCSqGSIb3
DQEJARYWbm9yZXBseUBoZW5yeS1wdW1wLmNvbTCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBALZXtBu/H8kBOJdSKGJ0qe4mlM5haWHNbSZD9ih8Zh8V8JL6
iEvoEWqefNp4HAXHF2hZxuPPqKGoJnL+Q0GmkUmK3Sj+lxTYHfOo8DxA1h1hSoO/
wiVUKTGrs0HfA1CF+4XDLktIw9gQ3wMVSKGQjq09aMOX20W97SipbTeLMyxEZekC
yfPTo0/D/mabltNmacNKYzCki3gnY4wTLGvBC7kUBsM00v/nH15Tdtc3pg81L/1H
aVTR/rWG0Rjv5MX5tZPVlTmYonQEYxv201jJSZe9S4uitJjKhL90LWt1wK4btvxg
tBOyS7T5yjUKJ4nhJpV8jUdupBxHFAIDn8cvyfkCAwEAATANBgkqhkiG9w0BAQsF
AAOCAQEATPlVtR0/I+fy5iSmLKoBexQPC4utffCyppW+onoLCAetpKpCpsyYtb74
FkefqCIyjcpjuKJJNnKVHGUr7hr3L3hDzybTxNu8LUpfioNPlbNjdowi29W3I1MX
2miDwylAL4F5X/hQkmJ8jxdLFdI2obcGqo7vzvryY25BRhT9H5VOcDYNlC/gvaN1
exsv8bIyo1BdwVzcW0ucDRjXbbUNBkMM6J7LLh4X3ZvAxe62CQfrw3pUmeml+bi1
IGSmA0QgJwtH+LVbqHlQfOhQFHrBr8SfrbyqDyqeRG13eaiwjqAczR902IHG1pev
ZOAqwqO3Vaf6yYh80iX3hFDKZ5QN+A==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIEAzCCAuugAwIBAgIUFCudUXwBqKUNreGC28n/HyRCLZowDQYJKoZIhvcNAQEL
BQAwgZAxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4GA1UEBwwHTWlk
bGFuZDETMBEGA1UECgwKSGVucnkgUHVtcDETMBEGA1UECwwKQXV0b21hdGlvbjEO
MAwGA1UEAwwFSFBJb1QxJTAjBgkqhkiG9w0BCQEWFm5vcmVwbHlAaGVucnktcHVt
cC5jb20wHhcNMTkxMTIwMTYwMDE3WhcNMjIwOTA5MTYwMDE3WjCBkDELMAkGA1UE
BhMCVVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdNaWRsYW5kMRMwEQYDVQQK
DApIZW5yeSBQdW1wMRMwEQYDVQQLDApBdXRvbWF0aW9uMQ4wDAYDVQQDDAVIUElv
VDElMCMGCSqGSIb3DQEJARYWbm9yZXBseUBoZW5yeS1wdW1wLmNvbTCCASIwDQYJ
KoZIhvcNAQEBBQADggEPADCCAQoCggEBAONzfIpip5r/jQuDH6T5RfETBUQz2fz6
XgmzuMV6cxnpgbL+TTg6XUPmYirTpiT4n+uLzOmv3YlDJwvlci9VTBtrZngrS0iL
/izL1eL2cxIXlT8EVngR+f6JEuYN5ZGYsWrvEf7wJkqpeR99PJwmgoEwWEFDF1Ri
j6A/YuLEmJs8+Ox5ndj7fI7xU/5c2nBCayHpSQEXh9KAMIJ1oi9qAKVgQpczqXLl
h6tzlqyB2eQfSSSch6SjXMJ8z3H8m3QxTiVfk95LX0E16ufF0f5WDTAB6HFdSs3C
9MISDWkzTNt+ayl6WFi2tCHGUHstjrKpwKu0BSRij1FauoCmwIiti5sCAwEAAaNT
MFEwHQYDVR0OBBYEFPS+HjbxdMY+0FyHD8QGdKpYeXFOMB8GA1UdIwQYMBaAFPS+
HjbxdMY+0FyHD8QGdKpYeXFOMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL
BQADggEBAK/rznXdYhm5cTJWfJn7oU1aaU3i0PDD9iL72kRqyaeKY0Be0iUDCXlB
zCnC3RVWD5RCnktU6RhxcvuOJhisOmr+nVDamk93771+D2Dc0ONCEMq6uRFjykYs
iV1V0DOYJ/G1pq9bXaKT9CGsLt0r9DKasy8+Bl/U5//MPYbunDGZO7MwwV9YZXns
BLGWsjlRRQEj2IPeIobygajhBn5KHLIfVp9iI5bg68Zpf0VScKFIzo7wej5bX5xV
hrlX48fFgM/M0Q2zGauVPAiY1aV4FctdmfstEjoaXAlkQQUsCDTdpTjIPrnLLvd1
lqM/pJrHKTd2pLeRpFEtPWWTJt1Sff4=
-----END CERTIFICATE-----

143
driver.py Normal file

@@ -0,0 +1,143 @@
import json
import time
from datetime import datetime as datetime
import logging
from logging.handlers import RotatingFileHandler
import sys
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient
import threading
from data_points import plcDataPoint,modbusDataPoint,currentDataPoint,voltageDataPoint
def run(config, device, port, host, rootCAPath):
log_formatter = logging.Formatter('%(asctime)s %(levelname)s %(funcName)s(%(lineno)d) %(message)s')
log_file = './logs/{}.log'.format(device)
my_handler = RotatingFileHandler(log_file, mode='a', maxBytes=500*1024, backupCount=2, encoding=None, delay=0)
my_handler.setFormatter(log_formatter)
my_handler.setLevel(logging.INFO)
filelogger = logging.getLogger('{}'.format(device))
filelogger.setLevel(logging.INFO)
filelogger.addHandler(my_handler)
console_out = logging.StreamHandler(sys.stdout)
console_out.setFormatter(log_formatter)
filelogger.addHandler(console_out)
filelogger.info("IN Driver")
filelogger.info("Got Config: \n{}".format(config))
#Extract data from passed in config
app = config['appname']
company = config['company']
field = config['field']
locationID = config['locationID']
#deviceType = config['deviceType']
certificateID = config['certificateID']
#Build a topic and last will payload
dt_topic = "dt/{}/{}/{}/{}".format(app, company, field, locationID)
alm_topic = "alm/{}/{}/{}/{}".format(app,company, field, locationID)
lwtPayload = {"connected": 0}
#Generate a cert if needed
#Configure connection to AWS IoT Core with proper certificate
myAWSIoTMQTTClient = None
myAWSIoTMQTTClient = AWSIoTMQTTClient(certificateID)
myAWSIoTMQTTClient.configureEndpoint(host, port)
myAWSIoTMQTTClient.configureCredentials(rootCAPath, './device1Cert.key', './device1CertAndCACert.pem')
myAWSIoTMQTTClient.configureLastWill(dt_topic,json.dumps(lwtPayload),1)
try:
myAWSIoTMQTTClient.connect()
connectedPayload = {"connected": 1}
myAWSIoTMQTTClient.publish(dt_topic, json.dumps(connectedPayload),1)
except Exception as e:
filelogger.info("Didn't connect: {}".format(e))
#build data points loop through config and use a class to make a data point
#if plcdata != to empty then setup polls for tags
#use ping and reads as watchdog values for connectivity
#if modbusdata != to empty then setup polls for registers
#use reads as watchdog values for connectivity
#if currentdata != to empty then setup polls for current
#if raw current value > 3.5 then current is good else current disconnected
#if voltagedata != to empty then setup polls for voltage
#if raw voltage value > 0 then voltage is good else voltage disconnected
datapoints = []
if not config["PLCData"] == "empty":
for key in config['PLCData'].keys():
changeThreshold = config['PLCData'][key]["changeThreshold"]
guaranteed = config['PLCData'][key]["guaranteed"]
plcIP = config['PLCData'][key]["plcIP"]
plcType = config['PLCData'][key]["plcType"]
tag = config['PLCData'][key]["tag"]
name = config['PLCData'][key]["name"]
if "alert" in config['PLCData'][key].keys():
threshold = config['PLCData'][key]["alert"]["threshold"]
condition = config['PLCData'][key]["alert"]["condition"]
response = config['PLCData'][key]["alert"]["response"]
contact = config['PLCData'][key]["alert"]["contact"]
datapoint = plcDataPoint(changeThreshold,guaranteed,str(name),plcIP=str(plcIP),plcType=str(plcType),tag=str(tag),alertThreshold=threshold,alertCondition=condition,alertResponse=response,alertContact=contact)
else:
datapoint = plcDataPoint(changeThreshold,guaranteed,str(name),plcIP=str(plcIP),plcType=str(plcType),tag=str(tag))
datapoints.append(datapoint)
if not config["modbusData"] == "empty":
pass
if not config["currentData"] == "empty":
pass
if not config["voltageData"] == "empty":
pass
#build alert points
    #A function for polling general data; polling can be latent, with no more than a minute between polls
#loop through list of data points to read and check value changes
#sleep for 30 secs
def dataCollection():
while True:
message = {}
for datapoint in datapoints:
                val, alertMessage = datapoint.read()
                if alertMessage is not None and not datapoint.alerted:
                    myAWSIoTMQTTClient.publish(alm_topic, json.dumps(alertMessage), 1)
                    datapoint.alerted = True
if datapoint.checkSend(val):
message[datapoint.name] = val
if message:
message["timestamp"] = datetime.now().isoformat()
filelogger.info("Publishing: {}\nTo Topic: {}".format(message,dt_topic))
myAWSIoTMQTTClient.publish(dt_topic, json.dumps(message),1)
time.sleep(5)
#A function for polling alert data should be very near real time
#if plcdata != to empty then setup polls for tags
#use ping and reads as watchdog values for connectivity
#if modbusdata != to empty then setup polls for registers
#use reads as watchdog values for connectivity
#if currentdata != to empty then setup polls for current
#if raw current value > 3.5 then current is good else current disconnected
#if voltagedata != to empty then setup polls for voltage
#if raw voltage value > 0 then voltage is good else voltage disconnected
#sleep for 1 secs
def alertCollection():
pass
#Start a thread for data and a thread for alerts
# list of all threads, so that they can be killed afterwards
all_threads = []
data_thread = threading.Thread(target=dataCollection, args=(), name="Thread-data")
data_thread.start()
all_threads.append(data_thread)
alert_thread = threading.Thread(target=alertCollection, args=(), name="Thread-alerts")
alert_thread.start()
all_threads.append(alert_thread)
for thread in all_threads:
thread.join()
#myAWSIoTMQTTClient.disconnect()
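alertCollection() is still a stub in this commit. A hypothetical sketch of the near-real-time loop the comments describe, meant to live inside run() so it can reuse the datapoints, myAWSIoTMQTTClient, and alm_topic closure variables (1-second cycle, alert points only):

def alertCollection():
    while True:
        for datapoint in datapoints:
            if not datapoint.alertThreshold:
                continue  # point carries no alert settings
            val, alertMessage = datapoint.read()
            if alertMessage is not None and not datapoint.alerted:
                myAWSIoTMQTTClient.publish(alm_topic, json.dumps(alertMessage), 1)
                datapoint.alerted = True
        time.sleep(1)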

BIN
driver.pyc Normal file

Binary file not shown.

93
logs/device1.log Normal file

@@ -0,0 +1,93 @@
2020-01-20 14:24:51,205 INFO run(26) IN Driver
2020-01-20 14:24:51,206 INFO run(27) Got Config:
{u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': u'empty', u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-20 14:26:34,220 INFO run(26) IN Driver
2020-01-20 14:26:34,222 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': u'empty', u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-20 15:08:28,235 INFO run(26) IN Driver
2020-01-20 15:08:28,236 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': u'empty', u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-20 15:09:18,894 INFO run(26) IN Driver
2020-01-20 15:09:18,895 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-20 15:10:19,977 INFO run(26) IN Driver
2020-01-20 15:10:19,979 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:16:52,980 INFO run(26) IN Driver
2020-01-21 13:16:52,981 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:18:28,723 INFO run(26) IN Driver
2020-01-21 13:18:28,724 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:21:45,694 INFO run(26) IN Driver
2020-01-21 13:21:45,695 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:23:56,621 INFO run(26) IN Driver
2020-01-21 13:23:56,622 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:24:25,281 INFO run(26) IN Driver
2020-01-21 13:24:25,283 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:40:43,424 INFO run(26) IN Driver
2020-01-21 13:40:43,427 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:41:20,835 INFO run(26) IN Driver
2020-01-21 13:41:20,836 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:50:21,215 INFO run(26) IN Driver
2020-01-21 13:50:21,217 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'}
2020-01-21 13:50:21,739 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:21.739073', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:50:26,876 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:26.876741', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:50:32,065 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:32.065283', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:50:37,202 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:37.201957', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:50:42,385 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:42.385094', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:50:47,523 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:47.523263', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:50:52,667 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:52.667452', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:50:57,811 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:57.811198', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:51:02,953 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:51:02.953156', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/henrypump/inventory/0
2020-01-21 13:54:00,990 INFO run(26) IN Driver
2020-01-21 13:54:00,992 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'QEP', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'North', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': u'POE 1', u'currentData': u'empty'}
2020-01-21 13:54:01,514 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:01.514449', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:54:06,701 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:06.701727', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:54:11,840 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:11.840273', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:54:16,969 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:16.969216', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:54:22,110 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:22.109787', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:54:27,253 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:27.253244', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:54:32,392 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:32.392205', 'pond 2 height': -17.29999542236328}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:57:56,108 INFO run(26) IN Driver
2020-01-21 13:57:56,109 INFO run(27) Got Config:
{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'QEP', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'current', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'volumeflow', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'North', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': u'POE 1', u'currentData': u'empty'}
2020-01-21 13:57:56,742 INFO dataCollection(97) Publishing: {'current': 12.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:57:56.742390'}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:58:01,878 INFO dataCollection(97) Publishing: {'current': 12.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:01.878045'}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:58:07,014 INFO dataCollection(97) Publishing: {'current': 15.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:07.013781'}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:58:12,198 INFO dataCollection(97) Publishing: {'current': 15.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:12.198353'}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:58:17,338 INFO dataCollection(97) Publishing: {'current': 27.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:17.338821'}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:58:22,468 INFO dataCollection(97) Publishing: {'current': 27.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:22.468762'}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:58:27,608 INFO dataCollection(97) Publishing: {'current': 27.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:27.608766'}
To Topic: dt/hpiot/QEP/North/POE 1
2020-01-21 13:58:32,749 INFO dataCollection(97) Publishing: {'current': 27.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:32.749239'}
To Topic: dt/hpiot/QEP/North/POE 1

15
logs/test.log Normal file

@@ -0,0 +1,15 @@
2020-01-21 13:30:40,848 INFO run(26) IN Driver
2020-01-21 13:30:40,849 INFO run(27) Got Config:
{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'currentData': u'empty', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'locationID': 0, u'appname': u'hpiot', u'voltageData': u'empty', u'company': u'henrypump', u'modbusData': u'empty'}
2020-01-21 13:35:19,199 INFO run(26) IN Driver
2020-01-21 13:35:19,201 INFO run(27) Got Config:
{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'modbusData': u'empty', u'appname': u'hpiot', u'locationID': 0, u'company': u'henrypump', u'currentData': u'empty', u'voltageData': u'empty'}
2020-01-21 13:38:31,119 INFO run(26) IN Driver
2020-01-21 13:38:31,119 INFO run(26) IN Driver
2020-01-21 13:38:31,126 INFO run(27) Got Config:
{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'modbusData': u'empty', u'appname': u'hpiot', u'locationID': 0, u'company': u'henrypump', u'currentData': u'empty', u'voltageData': u'empty'}
2020-01-21 13:38:31,126 INFO run(27) Got Config:
{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'modbusData': u'empty', u'appname': u'hpiot', u'locationID': 0, u'company': u'henrypump', u'currentData': u'empty', u'voltageData': u'empty'}
2020-01-21 13:39:34,604 INFO run(26) IN Driver
2020-01-21 13:39:34,605 INFO run(27) Got Config:
{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'currentData': u'empty', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'appname': u'hpiot', u'locationID': 0, u'company': u'henrypump', u'modbusData': u'empty', u'voltageData': u'empty'}

127
main.py Normal file

@@ -0,0 +1,127 @@
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient
import logging
import time
import argparse
import json
import os
from datetime import datetime
import urllib
import multiprocessing
import driver
import utilities
def main():
AllowedActions = ['both', 'publish', 'subscribe']
# Custom MQTT message callback
def customCallback(client, userdata, message):
print("Client: ")
print(client)
print("User Data: ")
print(userdata)
print("Received a new message: ")
print(message.payload)
print("from topic: ")
print(message.topic)
print("--------------\n\n")
# Read in command-line parameters
parser = argparse.ArgumentParser()
parser.add_argument("-e", "--endpoint", action="store", required=True, dest="host", help="Your AWS IoT custom endpoint")
parser.add_argument("-r", "--rootCA", action="store", required=True, dest="rootCAPath", help="Root CA file path")
parser.add_argument("-c", "--cert", action="store", dest="certificatePath", help="Certificate file path")
parser.add_argument("-k", "--key", action="store", dest="privateKeyPath", help="Private key file path")
parser.add_argument("-p", "--port", action="store", dest="port", type=int, help="Port number override")
parser.add_argument("-w", "--websocket", action="store_true", dest="useWebsocket", default=False,
help="Use MQTT over WebSocket")
parser.add_argument("-id", "--clientId", action="store", dest="clientId", default="basicPubSub",
help="Targeted client id")
parser.add_argument("-t", "--topic", action="store", dest="topic", default="dt/hpiot/", help="Targeted topic")
parser.add_argument("-m", "--mode", action="store", dest="mode", default="both",
help="Operation modes: %s"%str(AllowedActions))
parser.add_argument("-M", "--message", action="store", dest="message", default="Hello World!",
help="Message to publish")
args = parser.parse_args()
host = args.host
rootCAPath = args.rootCAPath
certificatePath = args.certificatePath
privateKeyPath = args.privateKeyPath
port = args.port
useWebsocket = args.useWebsocket
topic = args.topic
def jitp_registration():
#Attempt to connect to AWS IoT Core and start JITP for given certificate
myAWSIoTMQTTClient = None
myAWSIoTMQTTClient = AWSIoTMQTTClient(certificateID)
myAWSIoTMQTTClient.configureEndpoint(host, port)
myAWSIoTMQTTClient.configureCredentials(rootCAPath, './device1Cert.key', './device1CertAndCACert.pem')
while True:
try:
myAWSIoTMQTTClient.connect()
myAWSIoTMQTTClient.disconnect()
break
except Exception as e:
logger.info("Didn't connect trying again in 10 seconds: {}".format(e))
time.sleep(10)
#Get the config that should be in the database after JITP concludes
return json.load(urllib.urlopen('https://4ax24ru9ra.execute-api.us-east-1.amazonaws.com/Gamma/HPIoTgetConfig/?certificateID={}'.format(certificateID)))
# Configure logging
logger = logging.getLogger("AWSIoTPythonSDK.core")
logger.setLevel(logging.INFO)
streamHandler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
streamHandler.setFormatter(formatter)
logger.addHandler(streamHandler)
#Checking for main device certificate or making it if absent
if not os.path.isfile('./device1Cert.pem'):
os.system('openssl genrsa -out device1Cert.key 2048')
os.system('openssl req -config server.conf -new -key device1Cert.key -out device1Cert.pem')
os.system('openssl x509 -req -in device1Cert.pem -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out device1Cert.pem -days 365 -sha256')
if not os.path.isfile('./device1CertAndCACert.pem'):
os.system('cat device1Cert.pem rootCA.pem > device1CertAndCACert.pem')
certificateID = os.popen('openssl x509 -in device1Cert.pem -outform der | sha256sum').read()[:-4]
#Download the config from dynamodb with API call
logger.info("Attempting to download config file")
config = {}
try:
config = json.load(urllib.urlopen('https://4ax24ru9ra.execute-api.us-east-1.amazonaws.com/Gamma/HPIoTgetConfig/?certificateID={}'.format(certificateID)))
except Exception as e:
logger.error(e)
    #No config in database; the device probably hasn't been registered yet, so attempt to connect and start JITP
if 'certificateID' not in config.keys():
config = jitp_registration()
#config = utilities.unmarshal_dynamodb_json(config)
print(config)
#Get all the device names from the config
devices = [ele for ele in config.keys() if('device' in ele)]
#Build a list of all processes, so that they can be terminated afterwards
all_processes = []
for device in devices:
driver.run(config[device],device,port, host, rootCAPath)
'''
process = multiprocessing.Process(target=driver.run, args=(config[device],device,port, host, rootCAPath), name="Process-{}".format(config[device]['locationID']))
process.start()
all_processes.append(process)
logger.info(all_processes)
for process in all_processes:
if process.exitcode:
process.terminate()
'''
if __name__ == '__main__':
main()
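For reference, a typical invocation given the argparse flags above; -e and -r are required, and -p must be passed explicitly because the parser defines no default port (the endpoint below is a placeholder):

python main.py -e YOUR_ENDPOINT.iot.us-east-1.amazonaws.com -r rootCA.pem -p 8883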

4028
minimalmodbus.py Normal file

File diff suppressed because it is too large

BIN
minimalmodbus.pyc Normal file

Binary file not shown.

1
pycomm/__init__.py Normal file

@@ -0,0 +1 @@
__author__ = 'agostino'

BIN
pycomm/__init__.pyc Normal file

Binary file not shown.


@@ -0,0 +1,2 @@
__author__ = 'agostino'
import logging

BIN
pycomm/ab_comm/__init__.pyc Normal file

Binary file not shown.

912
pycomm/ab_comm/clx.py Normal file

@@ -0,0 +1,912 @@
# -*- coding: utf-8 -*-
#
# clx.py - Ethernet/IP Client for Rockwell PLCs
#
#
# Copyright (c) 2014 Agostino Ruscito <ruscito@gmail.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
from pycomm.cip.cip_base import *
import logging
try: # Python 2.7+
from logging import NullHandler
except ImportError:
class NullHandler(logging.Handler):
def emit(self, record):
pass
logger = logging.getLogger(__name__)
logger.addHandler(NullHandler())
string_sizes = [82, 12, 16, 20, 40, 8]
class Driver(Base):
"""
This Ethernet/IP client is based on Rockwell specification. Please refer to the link below for details.
http://literature.rockwellautomation.com/idc/groups/literature/documents/pm/1756-pm020_-en-p.pdf
The following services have been implemented:
- Read Tag Service (0x4c)
- Read Tag Fragment Service (0x52)
- Write Tag Service (0x4d)
- Write Tag Fragment Service (0x53)
- Multiple Service Packet (0x0a)
The client has been successfully tested with the following PLCs:
- CompactLogix 5330ERM
- CompactLogix 5370
- ControlLogix 5572 and 1756-EN2T Module
"""
def __init__(self):
super(Driver, self).__init__()
self._buffer = {}
self._get_template_in_progress = False
self.__version__ = '0.2'
def get_last_tag_read(self):
""" Return the last tag read by a multi request read
:return: A tuple (tag name, value, type)
"""
return self._last_tag_read
def get_last_tag_write(self):
""" Return the last tag write by a multi request write
:return: A tuple (tag name, 'GOOD') if the write was successful otherwise (tag name, 'BAD')
"""
return self._last_tag_write
def _parse_instance_attribute_list(self, start_tag_ptr, status):
""" extract the tags list from the message received
:param start_tag_ptr: The point in the message string where the tag list begin
:param status: The status of the message receives
"""
tags_returned = self._reply[start_tag_ptr:]
tags_returned_length = len(tags_returned)
idx = 0
instance = 0
count = 0
try:
while idx < tags_returned_length:
instance = unpack_dint(tags_returned[idx:idx+4])
idx += 4
tag_length = unpack_uint(tags_returned[idx:idx+2])
idx += 2
tag_name = tags_returned[idx:idx+tag_length]
idx += tag_length
symbol_type = unpack_uint(tags_returned[idx:idx+2])
idx += 2
count += 1
self._tag_list.append({'instance_id': instance,
'tag_name': tag_name,
'symbol_type': symbol_type})
except Exception as e:
raise DataError(e)
if status == SUCCESS:
self._last_instance = -1
elif status == 0x06:
self._last_instance = instance + 1
else:
self._status = (1, 'unknown status during _parse_tag_list')
self._last_instance = -1
def _parse_structure_makeup_attributes(self, start_tag_ptr, status):
""" extract the tags list from the message received
:param start_tag_ptr: The point in the message string where the tag list begin
:param status: The status of the message receives
"""
self._buffer = {}
if status != SUCCESS:
self._buffer['Error'] = status
return
attribute = self._reply[start_tag_ptr:]
idx = 4
try:
if unpack_uint(attribute[idx:idx + 2]) == SUCCESS:
idx += 2
self._buffer['object_definition_size'] = unpack_dint(attribute[idx:idx + 4])
else:
self._buffer['Error'] = 'object_definition Error'
return
idx += 6
if unpack_uint(attribute[idx:idx + 2]) == SUCCESS:
idx += 2
self._buffer['structure_size'] = unpack_dint(attribute[idx:idx + 4])
else:
self._buffer['Error'] = 'structure Error'
return
idx += 6
if unpack_uint(attribute[idx:idx + 2]) == SUCCESS:
idx += 2
self._buffer['member_count'] = unpack_uint(attribute[idx:idx + 2])
else:
self._buffer['Error'] = 'member_count Error'
return
idx += 4
if unpack_uint(attribute[idx:idx + 2]) == SUCCESS:
idx += 2
self._buffer['structure_handle'] = unpack_uint(attribute[idx:idx + 2])
else:
self._buffer['Error'] = 'structure_handle Error'
return
return self._buffer
except Exception as e:
raise DataError(e)
def _parse_template(self, start_tag_ptr, status):
""" extract the tags list from the message received
:param start_tag_ptr: The point in the message string where the tag list begin
:param status: The status of the message receives
"""
tags_returned = self._reply[start_tag_ptr:]
bytes_received = len(tags_returned)
self._buffer += tags_returned
if status == SUCCESS:
self._get_template_in_progress = False
elif status == 0x06:
self._byte_offset += bytes_received
else:
self._status = (1, 'unknown status {0} during _parse_template'.format(status))
logger.warning(self._status)
self._last_instance = -1
def _parse_fragment(self, start_ptr, status):
""" parse the fragment returned by a fragment service.
:param start_ptr: Where the fragment start within the replay
:param status: status field used to decide if keep parsing or stop
"""
try:
data_type = unpack_uint(self._reply[start_ptr:start_ptr+2])
fragment_returned = self._reply[start_ptr+2:]
except Exception as e:
raise DataError(e)
fragment_returned_length = len(fragment_returned)
idx = 0
while idx < fragment_returned_length:
try:
typ = I_DATA_TYPE[data_type]
if self._output_raw:
value = fragment_returned[idx:idx+DATA_FUNCTION_SIZE[typ]]
else:
value = UNPACK_DATA_FUNCTION[typ](fragment_returned[idx:idx+DATA_FUNCTION_SIZE[typ]])
idx += DATA_FUNCTION_SIZE[typ]
except Exception as e:
raise DataError(e)
if self._output_raw:
self._tag_list += value
else:
self._tag_list.append((self._last_position, value))
self._last_position += 1
if status == SUCCESS:
self._byte_offset = -1
elif status == 0x06:
self._byte_offset += fragment_returned_length
else:
self._status = (2, '{0}: {1}'.format(SERVICE_STATUS[status], get_extended_status(self._reply, 48)))
logger.warning(self._status)
self._byte_offset = -1
def _parse_multiple_request_read(self, tags):
""" parse the message received from a multi request read:
For each tag parsed, the information extracted includes the tag name, the value read and the data type.
Those information are appended to the tag list as tuple
:return: the tag list
"""
offset = 50
position = 50
try:
number_of_service_replies = unpack_uint(self._reply[offset:offset+2])
tag_list = []
for index in range(number_of_service_replies):
position += 2
start = offset + unpack_uint(self._reply[position:position+2])
general_status = unpack_usint(self._reply[start+2:start+3])
if general_status == 0:
data_type = unpack_uint(self._reply[start+4:start+6])
value_begin = start + 6
value_end = value_begin + DATA_FUNCTION_SIZE[I_DATA_TYPE[data_type]]
value = self._reply[value_begin:value_end]
self._last_tag_read = (tags[index], UNPACK_DATA_FUNCTION[I_DATA_TYPE[data_type]](value),
I_DATA_TYPE[data_type])
else:
self._last_tag_read = (tags[index], None, None)
tag_list.append(self._last_tag_read)
return tag_list
except Exception as e:
raise DataError(e)
def _parse_multiple_request_write(self, tags):
""" parse the message received from a multi request writ:
For each tag parsed, the information extracted includes the tag name and the status of the writing.
Those information are appended to the tag list as tuple
:return: the tag list
"""
offset = 50
position = 50
try:
number_of_service_replies = unpack_uint(self._reply[offset:offset+2])
tag_list = []
for index in range(number_of_service_replies):
position += 2
start = offset + unpack_uint(self._reply[position:position+2])
general_status = unpack_usint(self._reply[start+2:start+3])
if general_status == 0:
self._last_tag_write = (tags[index] + ('GOOD',))
else:
self._last_tag_write = (tags[index] + ('BAD',))
tag_list.append(self._last_tag_write)
return tag_list
except Exception as e:
raise DataError(e)
def _check_reply(self):
""" check the replayed message for error
"""
self._more_packets_available = False
try:
if self._reply is None:
self._status = (3, '%s without reply' % REPLAY_INFO[unpack_uint(self._message[:2])])
return False
# Get the type of command
typ = unpack_uint(self._reply[:2])
# Encapsulation status check
if unpack_dint(self._reply[8:12]) != SUCCESS:
self._status = (3, "{0} reply status:{1}".format(REPLAY_INFO[typ],
SERVICE_STATUS[unpack_dint(self._reply[8:12])]))
return False
# Command Specific Status check
if typ == unpack_uint(ENCAPSULATION_COMMAND["send_rr_data"]):
status = unpack_usint(self._reply[42:43])
if status != SUCCESS:
self._status = (3, "send_rr_data reply:{0} - Extend status:{1}".format(
SERVICE_STATUS[status], get_extended_status(self._reply, 42)))
return False
else:
return True
elif typ == unpack_uint(ENCAPSULATION_COMMAND["send_unit_data"]):
status = unpack_usint(self._reply[48:49])
if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Read Tag Fragmented"]:
self._parse_fragment(50, status)
return True
if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Get Instance Attributes List"]:
self._parse_instance_attribute_list(50, status)
return True
if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Get Attributes"]:
self._parse_structure_makeup_attributes(50, status)
return True
if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Read Template"] and \
self._get_template_in_progress:
self._parse_template(50, status)
return True
if status == 0x06:
self._status = (3, "Insufficient Packet Space")
self._more_packets_available = True
elif status != SUCCESS:
self._status = (3, "send_unit_data reply:{0} - Extend status:{1}".format(
SERVICE_STATUS[status], get_extended_status(self._reply, 48)))
logger.warning(self._status)
return False
else:
return True
return True
except Exception as e:
raise DataError(e)
def read_tag(self, tag):
""" read tag from a connected plc
Possible combination can be passed to this method:
- ('Counts') a single tag name
- (['ControlWord']) a list with one tag or many
- (['parts', 'ControlWord', 'Counts'])
At the moment there is not a strong validation for the argument passed. The user should verify
the correctness of the format passed.
:return: None is returned in case of error otherwise the tag list is returned
"""
self.clear()
multi_requests = False
if isinstance(tag, list):
multi_requests = True
if not self._target_is_connected:
if not self.forward_open():
self._status = (6, "Target did not connected. read_tag will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. read_tag will not be executed.")
if multi_requests:
rp_list = []
for t in tag:
rp = create_tag_rp(t, multi_requests=True)
if rp is None:
self._status = (6, "Cannot create tag {0} request packet. read_tag will not be executed.".format(tag))
raise DataError("Cannot create tag {0} request packet. read_tag will not be executed.".format(tag))
else:
rp_list.append(chr(TAG_SERVICES_REQUEST['Read Tag']) + rp + pack_uint(1))
message_request = build_multiple_service(rp_list, Base._get_sequence())
else:
rp = create_tag_rp(tag)
if rp is None:
self._status = (6, "Cannot create tag {0} request packet. read_tag will not be executed.".format(tag))
return None
else:
# Creating the Message Request Packet
message_request = [
pack_uint(Base._get_sequence()),
chr(TAG_SERVICES_REQUEST['Read Tag']), # the Request Service
chr(len(rp) / 2), # the Request Path Size length in word
rp, # the request path
pack_uint(1)
]
if self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request),
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,
)) is None:
raise DataError("send_unit_data returned not valid data")
if multi_requests:
return self._parse_multiple_request_read(tag)
else:
# Get the data type
if self._status[0] == SUCCESS:
data_type = unpack_uint(self._reply[50:52])
try:
return UNPACK_DATA_FUNCTION[I_DATA_TYPE[data_type]](self._reply[52:]), I_DATA_TYPE[data_type]
except Exception as e:
raise DataError(e)
else:
return None
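# Call shapes for read_tag (tag names are hypothetical):
#   c.read_tag('Counts')                  -> (value, data_type) on success
#   c.read_tag(['Counts', 'ControlWord']) -> [(name, value, data_type), ...]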
def read_array(self, tag, counts, raw=False):
""" read array of atomic data type from a connected plc
At the moment there is not a strong validation for the argument passed. The user should verify
the correctness of the format passed.
:param tag: the name of the tag to read
:param counts: the number of element to read
:param raw: the value should output as raw-value (hex)
:return: None is returned in case of error otherwise the tag list is returned
"""
self.clear()
if not self._target_is_connected:
if not self.forward_open():
self._status = (7, "Target did not connected. read_tag will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. read_tag will not be executed.")
self._byte_offset = 0
self._last_position = 0
self._output_raw = raw
if self._output_raw:
self._tag_list = ''
else:
self._tag_list = []
while self._byte_offset != -1:
rp = create_tag_rp(tag)
if rp is None:
self._status = (7, "Cannot create tag {0} request packet. read_tag will not be executed.".format(tag))
return None
else:
# Creating the Message Request Packet
message_request = [
pack_uint(Base._get_sequence()),
chr(TAG_SERVICES_REQUEST["Read Tag Fragmented"]), # the Request Service
chr(len(rp) / 2), # the Request Path Size length in word
rp, # the request path
pack_uint(counts),
pack_dint(self._byte_offset)
]
if self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request),
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,
)) is None:
raise DataError("send_unit_data returned not valid data")
return self._tag_list
def write_tag(self, tag, value=None, typ=None):
""" write tag/tags from a connected plc
Possible combination can be passed to this method:
- ('tag name', Value, data type) as single parameters or inside a tuple
- ([('tag name', Value, data type), ('tag name2', Value, data type)]) as array of tuples
At the moment there is not a strong validation for the argument passed. The user should verify
the correctness of the format passed.
The type accepted are:
- BOOL
- SINT
- INT'
- DINT
- REAL
- LINT
- BYTE
- WORD
- DWORD
- LWORD
:param tag: tag name, or an array of tuple containing (tag name, value, data type)
:param value: the value to write or none if tag is an array of tuple or a tuple
:param typ: the type of the tag to write or none if tag is an array of tuple or a tuple
:return: None is returned in case of error otherwise the tag list is returned
"""
self.clear() # cleanup error string
multi_requests = False
if isinstance(tag, list):
multi_requests = True
if not self._target_is_connected:
if not self.forward_open():
self._status = (8, "Target did not connected. write_tag will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. write_tag will not be executed.")
if multi_requests:
rp_list = []
tag_to_remove = []
for idx, (name, value, typ) in enumerate(tag):
# Create the request path to wrap the tag name
rp = create_tag_rp(name, multi_requests=True)
if rp is None:
self._status = (8, "Cannot create tag {0} req. packet. write_tag will not be executed".format(tag))
return None
else:
try: # Trying to add the rp to the request path list
val = PACK_DATA_FUNCTION[typ](value)
rp_list.append(
chr(TAG_SERVICES_REQUEST['Write Tag'])
+ rp
+ pack_uint(S_DATA_TYPE[typ])
+ pack_uint(1)
+ val
)
except (LookupError, struct.error) as e:
self._status = (8, "Tag:{0} type:{1} removed from write list. Error:{2}.".format(name, typ, e))
# The tag at position idx has to be removed from the request list because of the error above
tag_to_remove.append(idx)
# Remove the tags that could not be packed, deleting in reverse order so the
# remaining indexes stay valid
for position in reversed(tag_to_remove):
del tag[position]
# Create the message request
message_request = build_multiple_service(rp_list, Base._get_sequence())
else:
if isinstance(tag, tuple):
name, value, typ = tag
else:
name = tag
rp = create_tag_rp(name)
if rp is None:
self._status = (8, "Cannot create tag {0} request packet. write_tag will not be executed.".format(tag))
logger.warning(self._status)
return None
else:
# Creating the Message Request Packet
message_request = [
pack_uint(Base._get_sequence()),
chr(TAG_SERVICES_REQUEST["Write Tag"]), # the Request Service
chr(len(rp) / 2), # the Request Path Size length in word
rp, # the request path
pack_uint(S_DATA_TYPE[typ]), # data type
pack_uint(1), # Add the number of tag to write
PACK_DATA_FUNCTION[typ](value)
]
ret_val = self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request),
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,
)
)
if multi_requests:
return self._parse_multiple_request_write(tag)
else:
if ret_val is None:
raise DataError("send_unit_data returned not valid data")
return ret_val
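# Call shapes for write_tag (tag names are hypothetical):
#   c.write_tag('Counts', 26, 'DINT')
#   c.write_tag(('Counts', 26, 'DINT'))
#   c.write_tag([('Counts', 26, 'DINT'), ('ControlWord', 30, 'DINT')])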
def write_array(self, tag, values, data_type, raw=False):
""" write array of atomic data type from a connected plc
At the moment there is not a strong validation for the argument passed. The user should verify
the correctness of the format passed.
:param tag: the name of the tag to read
:param data_type: the type of tag to write
:param values: the array of values to write, if raw: the frame with bytes
:param raw: indicates that the values are given as raw values (hex)
"""
self.clear()
if not isinstance(values, list):
self._status = (9, "A list of tags must be passed to write_array.")
logger.warning(self._status)
raise DataError("A list of tags must be passed to write_array.")
if not self._target_is_connected:
if not self.forward_open():
self._status = (9, "Target did not connected. write_array will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. write_array will not be executed.")
array_of_values = ""
byte_size = 0
byte_offset = 0
for i, value in enumerate(values):
if raw:
array_of_values += value
else:
array_of_values += PACK_DATA_FUNCTION[data_type](value)
byte_size += DATA_FUNCTION_SIZE[data_type]
if byte_size >= 450 or i == len(values)-1:
# create the message and send the fragment
rp = create_tag_rp(tag)
if rp is None:
self._status = (9, "Cannot create tag {0} request packet. \
write_array will not be executed.".format(tag))
return None
else:
# Creating the Message Request Packet
message_request = [
pack_uint(Base._get_sequence()),
chr(TAG_SERVICES_REQUEST["Write Tag Fragmented"]), # the Request Service
chr(len(rp) / 2), # the Request Path Size length in word
rp, # the request path
pack_uint(S_DATA_TYPE[data_type]), # Data type to write
pack_uint(len(values)), # Number of elements to write
pack_dint(byte_offset),
array_of_values # Fragment of elements to write
]
byte_offset += byte_size
if self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request),
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,
)) is None:
raise DataError("send_unit_data returned not valid data")
array_of_values = ""
byte_size = 0
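# Usage sketch (hypothetical tag name): the method above packs the values,
# splits them into fragments of roughly 450 bytes, and sends one Write Tag
# Fragmented request per fragment, advancing byte_offset each time:
#   c.write_array('Parts', [1, 2, 3, 4], 'DINT')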
def _get_instance_attribute_list_service(self):
""" Step 1: Finding user-created controller scope tags in a Logix5000 controller
This service returns instance IDs for each created instance of the symbol class, along with a list
of the attribute data associated with the requested attributes
"""
try:
if not self._target_is_connected:
if not self.forward_open():
self._status = (10, "Target did not connected. get_tag_list will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. get_tag_list will not be executed.")
self._last_instance = 0
self._get_template_in_progress = True
while self._last_instance != -1:
# Creating the Message Request Packet
message_request = [
pack_uint(Base._get_sequence()),
chr(TAG_SERVICES_REQUEST['Get Instance Attributes List']), # STEP 1
# the Request Path Size length in word
chr(3),
# Request Path ( 20 6B 25 00 Instance )
CLASS_ID["8-bit"], # Class id = 20 from spec 0x20
CLASS_CODE["Symbol Object"], # Logical segment: Symbolic Object 0x6B
INSTANCE_ID["16-bit"], # Instance Segment: 16 Bit instance 0x25
'\x00',
pack_uint(self._last_instance), # The instance
# Request Data
pack_uint(2), # Number of attributes to retrieve
pack_uint(1), # Attribute 1: Symbol name
pack_uint(2) # Attribute 2: Symbol type
]
if self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request),
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,
)) is None:
raise DataError("send_unit_data returned not valid data")
self._get_template_in_progress = False
except Exception as e:
raise DataError(e)
def _get_structure_makeup(self, instance_id):
"""
get the structure makeup for a specific structure
"""
if not self._target_is_connected:
if not self.forward_open():
self._status = (10, "Target did not connected. get_tag_list will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. get_tag_list will not be executed.")
message_request = [
pack_uint(self._get_sequence()),
chr(TAG_SERVICES_REQUEST['Get Attributes']),
chr(3), # Request Path ( 20 6B 25 00 Instance )
CLASS_ID["8-bit"], # Class id = 20 from spec 0x20
CLASS_CODE["Template Object"], # Logical segment: Template Object 0x6C
INSTANCE_ID["16-bit"], # Instance Segment: 16 Bit instance 0x25
'\x00',
pack_uint(instance_id),
pack_uint(4), # Number of attributes
pack_uint(4), # Template Object Definition Size UDINT
pack_uint(5), # Template Structure Size UDINT
pack_uint(2), # Template Member Count UINT
pack_uint(1) # Structure Handle We can use this to read and write UINT
]
if self.send_unit_data(
build_common_packet_format(DATA_ITEM['Connected'],
''.join(message_request), ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,)) is None:
raise DataError("send_unit_data returned not valid data")
return self._buffer
def _read_template(self, instance_id, object_definition_size):
""" get a list of the tags in the plc
"""
if not self._target_is_connected:
if not self.forward_open():
self._status = (10, "Target did not connected. get_tag_list will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. get_tag_list will not be executed.")
self._byte_offset = 0
self._buffer = ""
self._get_template_in_progress = True
try:
while self._get_template_in_progress:
# Creating the Message Request Packet
message_request = [
pack_uint(self._get_sequence()),
chr(TAG_SERVICES_REQUEST['Read Template']),
chr(3), # Request Path ( 20 6B 25 00 Instance )
CLASS_ID["8-bit"], # Class id = 20 from spec 0x20
CLASS_CODE["Template Object"], # Logical segment: Template Object 0x6C
INSTANCE_ID["16-bit"], # Instance Segment: 16 Bit instance 0x25
'\x00',
pack_uint(instance_id),
pack_dint(self._byte_offset), # Offset
pack_uint(((object_definition_size * 4)-23) - self._byte_offset)
]
if not self.send_unit_data(
build_common_packet_format(DATA_ITEM['Connected'], ''.join(message_request),
ADDRESS_ITEM['Connection Based'], addr_data=self._target_cid,)):
raise DataError("send_unit_data returned not valid data")
self._get_template_in_progress = False
return self._buffer
except Exception as e:
raise DataError(e)
def _isolating_user_tag(self):
try:
lst = self._tag_list
self._tag_list = []
for tag in lst:
if tag['tag_name'].find(':') != -1 or tag['tag_name'].find('__') != -1:
continue
if tag['symbol_type'] & 0b0001000000000000:
continue
dimension = (tag['symbol_type'] & 0b0110000000000000) >> 13
if tag['symbol_type'] & 0b1000000000000000:
template_instance_id = tag['symbol_type'] & 0b0000111111111111
tag_type = 'struct'
data_type = 'user-created'
self._tag_list.append({'instance_id': tag['instance_id'],
'template_instance_id': template_instance_id,
'tag_name': tag['tag_name'],
'dim': dimension,
'tag_type': tag_type,
'data_type': data_type,
'template': {},
'udt': {}})
else:
tag_type = 'atomic'
datatype = tag['symbol_type'] & 0b0000000011111111
data_type = I_DATA_TYPE[datatype]
if datatype == 0xc1:
bit_position = (tag['symbol_type'] & 0b0000011100000000) >> 8
self._tag_list.append({'instance_id': tag['instance_id'],
'tag_name': tag['tag_name'],
'dim': dimension,
'tag_type': tag_type,
'data_type': data_type,
'bit_position': bit_position})
else:
self._tag_list.append({'instance_id': tag['instance_id'],
'tag_name': tag['tag_name'],
'dim': dimension,
'tag_type': tag_type,
'data_type': data_type})
except Exception as e:
raise DataError(e)
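# Worked example of the symbol_type decoding above (value chosen for
# illustration): symbol_type = 0x20C4
#   bit 15 (0x8000) clear          -> atomic tag, not a structure
#   bits 13-14 (0x6000) >> 13 = 1  -> one array dimension
#   bits 0-7 (0x00FF) = 0xC4       -> CIP elementary type DINT
# For a BOOL (0xC1), bits 8-10 additionally carry the bit position; for a
# structure (bit 15 set), bits 0-11 carry the template instance id.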
def _parse_udt_raw(self, tag):
try:
buff = self._read_template(tag['template_instance_id'], tag['template']['object_definition_size'])
member_count = tag['template']['member_count']
names = buff.split('\00')
lst = []
tag['udt']['name'] = 'Not a user-defined structure'
for name in names:
if len(name) > 1:
if name.find(';') != -1:
tag['udt']['name'] = name[:name.find(';')]
elif name.find('ZZZZZZZZZZ') != -1:
continue
elif name.isalpha():
lst.append(name)
else:
continue
tag['udt']['internal_tags'] = lst
type_list = []
for i in xrange(member_count):
# skip the first member (index 0)
if i != 0:
array_size = unpack_uint(buff[:2])
try:
data_type = I_DATA_TYPE[unpack_uint(buff[2:4])]
except Exception:
data_type = "None"
offset = unpack_dint(buff[4:8])
type_list.append((array_size, data_type, offset))
buff = buff[8:]
tag['udt']['data_type'] = type_list
except Exception as e:
raise DataError(e)
def get_tag_list(self):
self._tag_list = []
# Step 1
self._get_instance_attribute_list_service()
# Step 2
self._isolating_user_tag()
# Step 3
for tag in self._tag_list:
if tag['tag_type'] == 'struct':
tag['template'] = self._get_structure_makeup(tag['template_instance_id'])
for idx, tag in enumerate(self._tag_list):
# print (tag)
if tag['tag_type'] == 'struct':
self._parse_udt_raw(tag)
# Step 4
return self._tag_list
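# Usage sketch: get_tag_list() drives steps 1-3 above and returns a list of
# dictionaries (hypothetical output fields shown are the ones built above):
#   for t in c.get_tag_list():
#       print t['tag_name'], t['data_type'], t['dim']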
def write_string(self, tag, value, size=82):
"""
Rockwell defines different string sizes:
STRING STRING_12 STRING_16 STRING_20 STRING_40 STRING_8
by default we assume size 82 (STRING)
"""
if size not in string_sizes:
raise DataError("String size is incorrect")
data_tag = ".".join((tag, "DATA"))
len_tag = ".".join((tag, "LEN"))
# create an empty array
data_to_send = [0] * size
for idx, val in enumerate(value):
data_to_send[idx] = ord(val)
self.write_tag(len_tag, len(value), 'DINT')
self.write_array(data_tag, data_to_send, 'SINT')
def read_string(self, tag):
data_tag = ".".join((tag, "DATA"))
len_tag = ".".join((tag, "LEN"))
length = self.read_tag(len_tag)
values = self.read_array(data_tag, length[0])
values = zip(*values)[1] #[val[1] for val in values]
char_array = [chr(ch) for ch in values]
return ''.join(char_array)
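# Usage sketch for the STRING helpers (hypothetical tag name): write_string
# updates <tag>.LEN and <tag>.DATA, read_string reassembles the characters:
#   c.write_string('MyString', 'hello')
#   print c.read_string('MyString')   # -> 'hello'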

BIN
pycomm/ab_comm/clx.pyc Normal file

Binary file not shown.

574
pycomm/ab_comm/slc.py Normal file

@@ -0,0 +1,574 @@
# -*- coding: utf-8 -*-
#
# slc.py - Ethernet/IP Client for Rockwell SLC/PLC-5 PLCs
#
#
# Copyright (c) 2014 Agostino Ruscito <ruscito@gmail.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
from pycomm.cip.cip_base import *
import re
import math
#import binascii
import logging
try: # Python 2.7+
from logging import NullHandler
except ImportError:
class NullHandler(logging.Handler):
def emit(self, record):
pass
logger = logging.getLogger(__name__)
logger.addHandler(NullHandler())
def parse_tag(tag):
t = re.search(r"(?P<file_type>[CT])(?P<file_number>\d{1,3})"
r"(:)(?P<element_number>\d{1,3})"
r"(.)(?P<sub_element>ACC|PRE|EN|DN|TT|CU|CD|DN|OV|UN|UA)", tag, flags=re.IGNORECASE)
if t:
if (1 <= int(t.group('file_number')) <= 255) \
and (0 <= int(t.group('element_number')) <= 255):
return True, t.group(0), {'file_type': t.group('file_type').upper(),
'file_number': t.group('file_number'),
'element_number': t.group('element_number'),
'sub_element': PCCC_CT[t.group('sub_element').upper()],
'read_func': '\xa2',
'write_func': '\xab',
'address_field': 3}
t = re.search(r"(?P<file_type>[LFBN])(?P<file_number>\d{1,3})"
r"(:)(?P<element_number>\d{1,3})"
r"(/(?P<sub_element>\d{1,2}))?",
tag, flags=re.IGNORECASE)
if t:
if t.group('sub_element') is not None:
if (1 <= int(t.group('file_number')) <= 255) \
and (0 <= int(t.group('element_number')) <= 255) \
and (0 <= int(t.group('sub_element')) <= 15):
return True, t.group(0), {'file_type': t.group('file_type').upper(),
'file_number': t.group('file_number'),
'element_number': t.group('element_number'),
'sub_element': t.group('sub_element'),
'read_func': '\xa2',
'write_func': '\xab',
'address_field': 3}
else:
if (1 <= int(t.group('file_number')) <= 255) \
and (0 <= int(t.group('element_number')) <= 255):
return True, t.group(0), {'file_type': t.group('file_type').upper(),
'file_number': t.group('file_number'),
'element_number': t.group('element_number'),
'sub_element': t.group('sub_element'),
'read_func': '\xa2',
'write_func': '\xab',
'address_field': 2}
t = re.search(r"(?P<file_type>[IO])(:)(?P<file_number>\d{1,3})"
r"(.)(?P<element_number>\d{1,3})"
r"(/(?P<sub_element>\d{1,2}))?", tag, flags=re.IGNORECASE)
if t:
if t.group('sub_element') is not None:
if (0 <= int(t.group('file_number')) <= 255) \
and (0 <= int(t.group('element_number')) <= 255) \
and (0 <= int(t.group('sub_element')) <= 15):
return True, t.group(0), {'file_type': t.group('file_type').upper(),
'file_number': t.group('file_number'),
'element_number': t.group('element_number'),
'sub_element': t.group('sub_element'),
'read_func': '\xa2',
'write_func': '\xab',
'address_field': 3}
else:
if (0 <= int(t.group('file_number')) <= 255) \
and (0 <= int(t.group('element_number')) <= 255):
return True, t.group(0), {'file_type': t.group('file_type').upper(),
'file_number': t.group('file_number'),
'element_number': t.group('element_number'),
'read_func': '\xa2',
'write_func': '\xab',
'address_field': 2}
t = re.search(r"(?P<file_type>S)"
r"(:)(?P<element_number>\d{1,3})"
r"(/(?P<sub_element>\d{1,2}))?", tag, flags=re.IGNORECASE)
if t:
if t.group('sub_element') is not None:
if (0 <= int(t.group('element_number')) <= 255) \
and (0 <= int(t.group('sub_element')) <= 15):
return True, t.group(0), {'file_type': t.group('file_type').upper(),
'file_number': '2',
'element_number': t.group('element_number'),
'sub_element': t.group('sub_element'),
'read_func': '\xa2',
'write_func': '\xab',
'address_field': 3}
else:
if 0 <= int(t.group('element_number')) <= 255:
return True, t.group(0), {'file_type': t.group('file_type').upper(),
'file_number': '2',
'element_number': t.group('element_number'),
'read_func': '\xa2',
'write_func': '\xab',
'address_field': 2}
t = re.search(r"(?P<file_type>B)(?P<file_number>\d{1,3})"
r"(/)(?P<element_number>\d{1,4})",
tag, flags=re.IGNORECASE)
if t:
if (1 <= int(t.group('file_number')) <= 255) \
and (0 <= int(t.group('element_number')) <= 4095):
bit_position = int(t.group('element_number'))
element_number = bit_position / 16
sub_element = bit_position - (element_number * 16)
return True, t.group(0), {'file_type': t.group('file_type').upper(),
'file_number': t.group('file_number'),
'element_number': element_number,
'sub_element': sub_element,
'read_func': '\xa2',
'write_func': '\xab',
'address_field': 3}
return False, tag
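# Worked examples for parse_tag (addresses are illustrative):
#   parse_tag('N7:0')     -> (True, 'N7:0', {... 'file_type': 'N', 'address_field': 2 ...})
#   parse_tag('T4:0.ACC') -> matches the timer/counter form; sub_element = PCCC_CT['ACC']
#   parse_tag('B3/18')    -> element 18/16 = 1, bit 18 - 16 = 2 (three address fields)
#   parse_tag('X9:9')     -> (False, 'X9:9') because X is not a recognized file type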
class Driver(Base):
"""
SLC/PLC_5 Implementation
"""
def __init__(self):
super(Driver, self).__init__()
self.__version__ = '0.1'
self._last_sequence = 0
def _check_reply(self):
"""
Check the reply message for errors
"""
self._more_packets_available = False
try:
if self._reply is None:
self._status = (3, '%s without reply' % REPLAY_INFO[unpack_uint(self._message[:2])])
return False
# Get the type of command
typ = unpack_uint(self._reply[:2])
# Encapsulation status check
if unpack_dint(self._reply[8:12]) != SUCCESS:
self._status = (3, "{0} reply status:{1}".format(REPLAY_INFO[typ],
SERVICE_STATUS[unpack_dint(self._reply[8:12])]))
return False
# Command Specific Status check
if typ == unpack_uint(ENCAPSULATION_COMMAND["send_rr_data"]):
status = unpack_usint(self._reply[42:43])
if status != SUCCESS:
self._status = (3, "send_rr_data reply:{0} - Extend status:{1}".format(
SERVICE_STATUS[status], get_extended_status(self._reply, 42)))
return False
else:
return True
elif typ == unpack_uint(ENCAPSULATION_COMMAND["send_unit_data"]):
status = unpack_usint(self._reply[48:49])
if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Read Tag Fragmented"]:
self._parse_fragment(50, status)
return True
if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Get Instance Attributes List"]:
self._parse_tag_list(50, status)
return True
if status == 0x06:
self._status = (3, "Insufficient Packet Space")
self._more_packets_available = True
elif status != SUCCESS:
self._status = (3, "send_unit_data reply:{0} - Extend status:{1}".format(
SERVICE_STATUS[status], get_extended_status(self._reply, 48)))
return False
else:
return True
return True
except Exception as e:
raise DataError(e)
def __queue_data_available(self, queue_number):
""" read the queue
Possible combination can be passed to this method:
print c.read_tag('F8:0', 3) return a list of 3 registers starting from F8:0
print c.read_tag('F8:0') return one value
It is possible to read status bit
:return: None is returned in case of error
"""
# Creating the Message Request Packet
self._last_sequence = pack_uint(Base._get_sequence())
# PCCC_Cmd_Rd_w3_Q2 = [0x0f, 0x00, 0x30, 0x00, 0xa2, 0x6d, 0x00, 0xa5, 0x02, 0x00]
message_request = [
self._last_sequence,
'\x4b',
'\x02',
CLASS_ID["8-bit"],
PATH["PCCC"],
'\x07',
self.attribs['vid'],
self.attribs['vsn'],
'\x0f',
'\x00',
self._last_sequence[1],
self._last_sequence[0],
'\xa2', # protected typed logical read with three address fields FNC
'\x6d', # Byte size to read = 109
'\x00', # File Number
'\xa5', # File Type
pack_uint(queue_number)
]
if self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request),
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,)):
sts = int(unpack_uint(self._reply[2:4]))
if sts == 146:
return True
else:
return False
else:
raise DataError("read_queue [send_unit_data] returned not valid data")
def __save_record(self, filename):
with open(filename, "a") as csv_file:
logger.debug("SLC __save_record read:{0}".format(self._reply[61:]))
csv_file.write(self._reply[61:]+'\n')
csv_file.close()
def __get_queue_size(self, queue_number):
""" get queue size
"""
# Creating the Message Request Packet
self._last_sequence = pack_uint(Base._get_sequence())
message_request = [
self._last_sequence,
'\x4b',
'\x02',
CLASS_ID["8-bit"],
PATH["PCCC"],
'\x07',
self.attribs['vid'],
self.attribs['vsn'],
'\x0f',
'\x00',
self._last_sequence[1],
self._last_sequence[0],
# '\x30',
# '\x00',
'\xa1', # FNC to get the queue size
'\x06', # Byte size to read = 06
'\x00', # File Number
'\xea', # File Type ????
'\xff', # File Type ????
pack_uint(queue_number)
]
if self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request),
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,)):
sts = int(unpack_uint(self._reply[65:67]))
logger.debug("SLC __get_queue_size({0}) returned {1}".format(queue_number, sts))
return sts
else:
raise DataError("read_queue [send_unit_data] returned not valid data")
def read_queue(self, queue_number, file_name):
""" read the queue
"""
if not self._target_is_connected:
if not self.forward_open():
self._status = (5, "Target did not connected. is_queue_available will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. is_queue_available will not be executed.")
if self.__queue_data_available(queue_number):
logger.debug("SLC read_queue: Queue {0} has data".format(queue_number))
self.__save_record(file_name)
size = self.__get_queue_size(queue_number)
if size > 0:
for i in range(0, size):
if self.__queue_data_available(queue_number):
self.__save_record(file_name)
logger.debug("SLC read_queue: {0} record extract from queue {1}".format(size, queue_number))
else:
logger.debug("SLC read_queue: Queue {0} has no data".format(queue_number))
def read_tag(self, tag, n=1):
""" read tag from a connected plc
Possible combination can be passed to this method:
print c.read_tag('F8:0', 3) return a list of 3 registers starting from F8:0
print c.read_tag('F8:0') return one value
It is possible to read status bit
:return: None is returned in case of error
"""
res = parse_tag(tag)
if not res[0]:
self._status = (1000, "Error parsing the tag passed to read_tag({0},{1})".format(tag, n))
logger.warning(self._status)
raise DataError("Error parsing the tag passed to read_tag({0},{1})".format(tag, n))
bit_read = False
bit_position = 0
sub_element = 0
if res[2]['address_field'] == 3:
bit_read = True
bit_position = int(res[2]['sub_element'])
if not self._target_is_connected:
if not self.forward_open():
self._status = (5, "Target did not connected. read_tag will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. read_tag will not be executed.")
data_size = PCCC_DATA_SIZE[res[2]['file_type']]
# Creating the Message Request Packet
self._last_sequence = pack_uint(Base._get_sequence())
message_request = [
self._last_sequence,
'\x4b',
'\x02',
CLASS_ID["8-bit"],
PATH["PCCC"],
'\x07',
self.attribs['vid'],
self.attribs['vsn'],
'\x0f',
'\x00',
self._last_sequence[1],
self._last_sequence[0],
res[2]['read_func'],
pack_usint(data_size * n),
pack_usint(int(res[2]['file_number'])),
PCCC_DATA_TYPE[res[2]['file_type']],
pack_usint(int(res[2]['element_number'])),
pack_usint(sub_element)
]
logger.debug("SLC read_tag({0},{1})".format(tag, n))
if self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request),
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,)):
sts = int(unpack_usint(self._reply[58]))
try:
if sts != 0:
sts_txt = PCCC_ERROR_CODE[sts]
self._status = (1000, "Error({0}) returned from read_tag({1},{2})".format(sts_txt, tag, n))
logger.warning(self._status)
raise DataError("Error({0}) returned from read_tag({1},{2})".format(sts_txt, tag, n))
new_value = 61
if bit_read:
if res[2]['file_type'] == 'T' or res[2]['file_type'] == 'C':
if bit_position == PCCC_CT['PRE']:
return UNPACK_PCCC_DATA_FUNCTION[res[2]['file_type']](
self._reply[new_value+2:new_value+2+data_size])
elif bit_position == PCCC_CT['ACC']:
return UNPACK_PCCC_DATA_FUNCTION[res[2]['file_type']](
self._reply[new_value+4:new_value+4+data_size])
tag_value = UNPACK_PCCC_DATA_FUNCTION[res[2]['file_type']](
self._reply[new_value:new_value+data_size])
return get_bit(tag_value, bit_position)
else:
values_list = []
while len(self._reply[new_value:]) >= data_size:
values_list.append(
UNPACK_PCCC_DATA_FUNCTION[res[2]['file_type']](self._reply[new_value:new_value+data_size])
)
new_value = new_value+data_size
if len(values_list) > 1:
return values_list
else:
return values_list[0]
except Exception as e:
self._status = (1000, "Error({0}) parsing the data returned from read_tag({1},{2})".format(e, tag, n))
logger.warning(self._status)
raise DataError("Error({0}) parsing the data returned from read_tag({1},{2})".format(e, tag, n))
else:
raise DataError("send_unit_data returned not valid data")
def write_tag(self, tag, value):
""" write tag from a connected plc
Possible combination can be passed to this method:
c.write_tag('N7:0', [-30, 32767, -32767])
c.write_tag('N7:0', 21)
c.read_tag('N7:0', 10)
It is not possible to write status bit
:return: None is returned in case of error
"""
res = parse_tag(tag)
if not res[0]:
self._status = (1000, "Error parsing the tag passed to read_tag({0},{1})".format(tag, value))
logger.warning(self._status)
raise DataError("Error parsing the tag passed to read_tag({0},{1})".format(tag, value))
if isinstance(value, list) and int(res[2]['address_field'] == 3):
self._status = (1000, "Function's parameters error. read_tag({0},{1})".format(tag, value))
logger.warning(self._status)
raise DataError("Function's parameters error. read_tag({0},{1})".format(tag, value))
if isinstance(value, list) and int(res[2]['address_field'] == 3):
self._status = (1000, "Function's parameters error. read_tag({0},{1})".format(tag, value))
logger.warning(self._status)
raise DataError("Function's parameters error. read_tag({0},{1})".format(tag, value))
bit_field = False
bit_position = 0
sub_element = 0
if res[2]['address_field'] == 3:
bit_field = True
bit_position = int(res[2]['sub_element'])
values_list = ''
else:
values_list = '\xff\xff'
multi_requests = False
if isinstance(value, list):
multi_requests = True
if not self._target_is_connected:
if not self.forward_open():
self._status = (1000, "Target did not connected. write_tag will not be executed.")
logger.warning(self._status)
raise DataError("Target did not connected. write_tag will not be executed.")
try:
n = 0
if multi_requests:
data_size = PCCC_DATA_SIZE[res[2]['file_type']]
for v in value:
values_list += PACK_PCCC_DATA_FUNCTION[res[2]['file_type']](v)
n += 1
else:
n = 1
if bit_field:
data_size = 2
if (res[2]['file_type'] == 'T' or res[2]['file_type'] == 'C') \
and (bit_position == PCCC_CT['PRE'] or bit_position == PCCC_CT['ACC']):
sub_element = bit_position
values_list = '\xff\xff' + PACK_PCCC_DATA_FUNCTION[res[2]['file_type']](value)
else:
sub_element = 0
if value > 0:
# use an integer mask: struct.pack warns/errors on float arguments
values_list = pack_uint(1 << bit_position) + pack_uint(1 << bit_position)
else:
values_list = pack_uint(1 << bit_position) + pack_uint(0)
else:
values_list += PACK_PCCC_DATA_FUNCTION[res[2]['file_type']](value)
data_size = PCCC_DATA_SIZE[res[2]['file_type']]
except Exception as e:
self._status = (1000, "Error({0}) packing the values to write to the"
"SLC write_tag({1},{2})".format(e, tag, value))
logger.warning(self._status)
raise DataError("Error({0}) packing the values to write to the "
"SLC write_tag({1},{2})".format(e, tag, value))
data_to_write = values_list
# Creating the Message Request Packet
self._last_sequence = pack_uint(Base._get_sequence())
message_request = [
self._last_sequence,
'\x4b',
'\x02',
CLASS_ID["8-bit"],
PATH["PCCC"],
'\x07',
self.attribs['vid'],
self.attribs['vsn'],
'\x0f',
'\x00',
self._last_sequence[1],
self._last_sequence[0],
res[2]['write_func'],
pack_usint(data_size * n),
pack_usint(int(res[2]['file_number'])),
PCCC_DATA_TYPE[res[2]['file_type']],
pack_usint(int(res[2]['element_number'])),
pack_usint(sub_element)
]
logger.debug("SLC write_tag({0},{1})".format(tag, value))
if self.send_unit_data(
build_common_packet_format(
DATA_ITEM['Connected'],
''.join(message_request) + data_to_write,
ADDRESS_ITEM['Connection Based'],
addr_data=self._target_cid,)):
sts = int(unpack_usint(self._reply[58]))
try:
if sts != 0:
sts_txt = PCCC_ERROR_CODE[sts]
self._status = (1000, "Error({0}) returned from SLC write_tag({1},{2})".format(sts_txt, tag, value))
logger.warning(self._status)
raise DataError("Error({0}) returned from SLC write_tag({1},{2})".format(sts_txt, tag, value))
return True
except Exception as e:
self._status = (1000, "Error({0}) parsing the data returned from "
"SLC write_tag({1},{2})".format(e, tag, value))
logger.warning(self._status)
raise DataError("Error({0}) parsing the data returned from "
"SLC write_tag({1},{2})".format(e, tag, value))
else:
raise DataError("send_unit_data returned not valid data")

1
pycomm/cip/__init__.py Normal file

@@ -0,0 +1 @@
__author__ = 'agostino'

BIN
pycomm/cip/__init__.pyc Normal file

Binary file not shown.

896
pycomm/cip/cip_base.py Normal file

@@ -0,0 +1,896 @@
# -*- coding: utf-8 -*-
#
# cip_base.py - A set of classes, methods and structures used to implement Ethernet/IP
#
#
# Copyright (c) 2014 Agostino Ruscito <ruscito@gmail.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
import struct
import socket
import random
from os import getpid
from pycomm.cip.cip_const import *
from pycomm.common import PycommError
import logging
try: # Python 2.7+
from logging import NullHandler
except ImportError:
class NullHandler(logging.Handler):
def emit(self, record):
pass
logger = logging.getLogger(__name__)
logger.addHandler(NullHandler())
class CommError(PycommError):
pass
class DataError(PycommError):
pass
def pack_sint(n):
return struct.pack('b', n)
def pack_usint(n):
return struct.pack('B', n)
def pack_int(n):
"""pack 16 bit into 2 bytes little endian"""
return struct.pack('<h', n)
def pack_uint(n):
"""pack 16 bit into 2 bytes little endian"""
return struct.pack('<H', n)
def pack_dint(n):
"""pack 32 bit into 4 bytes little endian"""
return struct.pack('<i', n)
def pack_real(r):
"""pack 32-bit float into 4 bytes little endian"""
return struct.pack('<f', r)
def pack_lint(l):
"""pack 64-bit into 8 bytes little endian"""
return struct.pack('<q', l)
def unpack_bool(st):
if not (int(struct.unpack('B', st[0])[0]) == 0):
return 1
return 0
def unpack_sint(st):
return int(struct.unpack('b', st[0])[0])
def unpack_usint(st):
return int(struct.unpack('B', st[0])[0])
def unpack_int(st):
"""unpack 2 bytes little endian to int"""
return int(struct.unpack('<h', st[0:2])[0])
def unpack_uint(st):
"""unpack 2 bytes little endian to int"""
return int(struct.unpack('<H', st[0:2])[0])
def unpack_dint(st):
"""unpack 4 bytes little endian to int"""
return int(struct.unpack('<i', st[0:4])[0])
def unpack_real(st):
"""unpack 4 bytes little endian to float"""
return float(struct.unpack('<f', st[0:4])[0])
def unpack_lint(st):
"""unpack 8 bytes little endian to int"""
return int(struct.unpack('<q', st[0:8])[0])
def get_bit(value, idx):
""":returns value of bit at position idx"""
return (value & (1 << idx)) != 0
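# Round-trip sketch for the pack/unpack helpers (values chosen arbitrarily):
#   pack_uint(0x1234)       -> '\x34\x12' (little endian)
#   unpack_uint('\x34\x12') -> 4660 (0x1234)
#   get_bit(0b1010, 1)      -> True; get_bit(0b1010, 0) -> False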
PACK_DATA_FUNCTION = {
'BOOL': pack_sint,
'SINT': pack_sint, # Signed 8-bit integer
'INT': pack_int, # Signed 16-bit integer
'UINT': pack_uint, # Unsigned 16-bit integer
'USINT': pack_usint, # Unsigned Byte Integer
'DINT': pack_dint, # Signed 32-bit integer
'REAL': pack_real, # 32-bit floating point
'LINT': pack_lint,
'BYTE': pack_sint, # byte string 8-bits
'WORD': pack_uint, # byte string 16-bits
'DWORD': pack_dint, # byte string 32-bits
'LWORD': pack_lint # byte string 64-bits
}
UNPACK_DATA_FUNCTION = {
'BOOL': unpack_bool,
'SINT': unpack_sint, # Signed 8-bit integer
'INT': unpack_int, # Signed 16-bit integer
'UINT': unpack_uint, # Unsigned 16-bit integer
'USINT': unpack_usint, # Unsigned Byte Integer
'DINT': unpack_dint, # Signed 32-bit integer
'REAL': unpack_real, # 32-bit floating point,
'LINT': unpack_lint,
'BYTE': unpack_sint, # byte string 8-bits
'WORD': unpack_uint, # byte string 16-bits
'DWORD': unpack_dint, # byte string 32-bits
'LWORD': unpack_lint # byte string 64-bits
}
DATA_FUNCTION_SIZE = {
'BOOL': 1,
'SINT': 1, # Signed 8-bit integer
'USINT': 1, # Unsigned 8-bit integer
'INT': 2, # Signed 16-bit integer
'UINT': 2, # Unsigned 16-bit integer
'DINT': 4, # Signed 32-bit integer
'REAL': 4, # 32-bit floating point
'LINT': 8,
'BYTE': 1, # byte string 8-bits
'WORD': 2, # byte string 16-bits
'DWORD': 4, # byte string 32-bits
'LWORD': 8 # byte string 64-bits
}
UNPACK_PCCC_DATA_FUNCTION = {
'N': unpack_int,
'B': unpack_int,
'T': unpack_int,
'C': unpack_int,
'S': unpack_int,
'F': unpack_real,
'A': unpack_sint,
'R': unpack_dint,
'O': unpack_int,
'I': unpack_int
}
PACK_PCCC_DATA_FUNCTION = {
'N': pack_int,
'B': pack_int,
'T': pack_int,
'C': pack_int,
'S': pack_int,
'F': pack_real,
'A': pack_sint,
'R': pack_dint,
'O': pack_int,
'I': pack_int
}
def print_bytes_line(msg):
out = ''
for ch in msg:
out += "{:0>2x}".format(ord(ch))
return out
def print_bytes_msg(msg, info=''):
out = info
new_line = True
line = 0
column = 0
for idx, ch in enumerate(msg):
if new_line:
out += "\n({:0>4d}) ".format(line * 10)
new_line = False
out += "{:0>2x} ".format(ord(ch))
if column == 9:
new_line = True
column = 0
line += 1
else:
column += 1
return out
def get_extended_status(msg, start):
status = unpack_usint(msg[start:start+1])
# send_rr_data
# 42 General Status
# 43 Size of additional status
# 44..n additional status
# send_unit_data
# 48 General Status
# 49 Size of additional status
# 50..n additional status
extended_status_size = (unpack_usint(msg[start+1:start+2]))*2
extended_status = 0
if extended_status_size != 0:
# There is an additional status
if extended_status_size == 1:
extended_status = unpack_usint(msg[start+2:start+3])
elif extended_status_size == 2:
extended_status = unpack_uint(msg[start+2:start+4])
elif extended_status_size == 4:
extended_status = unpack_dint(msg[start+2:start+6])
else:
return 'Extended Status Size Unknown'
try:
return '{0}'.format(EXTEND_CODES[status][extended_status])
except LookupError:
return "Extended Status info not present"
def create_tag_rp(tag, multi_requests=False):
""" Create tag Request Packet
It returns the request path wrapped around the tag passed.
If any error occurs it returns None
"""
tags = tag.split('.')
rp = []
index = []
for tag in tags:
add_index = False
# Check if is an array tag
if tag.find('[') != -1:
# Remove the last square bracket
tag = tag[:len(tag)-1]
# Isolate the value inside bracket
inside_value = tag[tag.find('[')+1:]
# Now split the inside value in case part of multidimensional array
index = inside_value.split(',')
# Flag the existence of one or more indexes
add_index = True
# Get only the tag part
tag = tag[:tag.find('[')]
tag_length = len(tag)
# Create the request path
rp.append(EXTENDED_SYMBOL) # ANSI Ext. symbolic segment
rp.append(chr(tag_length)) # Length of the tag
# Add the tag to the Request path
for char in tag:
rp.append(char)
# Add pad byte because total length of Request path must be word-aligned
if tag_length % 2:
rp.append(PADDING_BYTE)
# Add any index
if add_index:
for idx in index:
val = int(idx)
if val <= 0xff:
rp.append(ELEMENT_ID["8-bit"])
rp.append(pack_usint(val))
elif val <= 0xffff:
rp.append(ELEMENT_ID["16-bit"]+PADDING_BYTE)
rp.append(pack_uint(val))
elif val <= 0xffffffff:
rp.append(ELEMENT_ID["32-bit"]+PADDING_BYTE)
rp.append(pack_dint(val))
else:
# Cannot create a valid request packet
return None
# At this point the Request Path is completed,
if multi_requests:
request_path = chr(len(rp)/2) + ''.join(rp)
else:
request_path = ''.join(rp)
return request_path
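# Worked example (assuming the usual constants from cip_const: EXTENDED_SYMBOL
# is the ANSI extended symbolic segment 0x91, ELEMENT_ID["8-bit"] is 0x28):
#   create_tag_rp('Counts[2]') -> '\x91' + chr(6) + 'Counts' + '\x28' + '\x02'
# 'Counts' is 6 characters, an even length, so no padding byte is appended.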
def build_common_packet_format(message_type, message, addr_type, addr_data=None, timeout=10):
""" build_common_packet_format
It creates the common part of a CIP message. Check Volume 2 (page 2.22) of the CIP specification for reference
"""
msg = pack_dint(0) # Interface Handle: shall be 0 for CIP
msg += pack_uint(timeout) # timeout
msg += pack_uint(2) # Item count: should be at least 2 (Address and Data)
msg += addr_type # Address Item Type ID
if addr_data is not None:
msg += pack_uint(len(addr_data)) # Address Item Length
msg += addr_data
else:
msg += pack_uint(0) # Address Item Length
msg += message_type # Data Type ID
msg += pack_uint(len(message)) # Data Item Length
msg += message
return msg
def build_multiple_service(rp_list, sequence=None):
mr = []
if sequence is not None:
mr.append(pack_uint(sequence))
mr.append(chr(TAG_SERVICES_REQUEST["Multiple Service Packet"])) # the Request Service
mr.append(pack_usint(2)) # the Request Path Size length in word
mr.append(CLASS_ID["8-bit"])
mr.append(CLASS_CODE["Message Router"])
mr.append(INSTANCE_ID["8-bit"])
mr.append(pack_usint(1)) # Instance 1
mr.append(pack_uint(len(rp_list))) # Number of service contained in the request
# Offset calculation
offset = (len(rp_list) * 2) + 2
for rp in rp_list:
mr.append(pack_uint(offset)) # offset of this embedded request within the packet data
offset += len(rp)
for rp in rp_list:
mr.append(rp)
return mr
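# Offset arithmetic sketch: with two request paths of 8 and 10 bytes, the
# offset table starts right after the count word, so the entries are
#   first  = (2 * 2) + 2 = 6
#   second = 6 + 8 = 14
# each measured from the start of the embedded Multiple Service Packet data.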
def parse_multiple_request(message, tags, typ):
""" parse_multi_request
This function should be used to parse the reply to a multiple-request service wrapped around the
send_unit_data message.
:param message: the full message returned from the PLC
:param tags: The list of tags to be read
:param typ: specifies whether the multiple-request service is READ or WRITE
:return: a list of tuples in the format [ (tag name, value, data type), (tag name, value, data type) ].
In case of error the tuple will be (tag name, None, None)
"""
offset = 50
position = 50
number_of_service_replies = unpack_uint(message[offset:offset+2])
tag_list = []
for index in range(number_of_service_replies):
position += 2
start = offset + unpack_uint(message[position:position+2])
general_status = unpack_usint(message[start+2:start+3])
if general_status == 0:
if typ == "READ":
data_type = unpack_uint(message[start+4:start+6])
try:
value_begin = start + 6
value_end = value_begin + DATA_FUNCTION_SIZE[I_DATA_TYPE[data_type]]
value = message[value_begin:value_end]
tag_list.append((tags[index],
UNPACK_DATA_FUNCTION[I_DATA_TYPE[data_type]](value),
I_DATA_TYPE[data_type]))
except LookupError:
tag_list.append((tags[index], None, None))
else:
tag_list.append((tags[index] + ('GOOD',)))
else:
if typ == "READ":
tag_list.append((tags[index], None, None))
else:
tag_list.append((tags[index] + ('BAD',)))
return tag_list
class Socket:
def __init__(self, timeout=5.0):
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.settimeout(timeout)
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
def connect(self, host, port):
try:
self.sock.connect((host, port))
except socket.timeout:
raise CommError("Socket timeout during connection.")
def send(self, msg, timeout=0):
if timeout != 0:
self.sock.settimeout(timeout)
total_sent = 0
while total_sent < len(msg):
try:
sent = self.sock.send(msg[total_sent:])
if sent == 0:
raise CommError("socket connection broken.")
total_sent += sent
except socket.error:
raise CommError("socket connection broken.")
return total_sent
def receive(self, timeout=0):
if timeout != 0:
self.sock.settimeout(timeout)
msg_len = 28
chunks = []
bytes_recd = 0
one_shot = True
while bytes_recd < msg_len:
try:
chunk = self.sock.recv(min(msg_len - bytes_recd, 2048))
if chunk == '':
raise CommError("socket connection broken.")
if one_shot:
data_size = int(struct.unpack('<H', chunk[2:4])[0]) # Length
msg_len = HEADER_SIZE + data_size
one_shot = False
chunks.append(chunk)
bytes_recd += len(chunk)
except socket.error as e:
raise CommError(e)
return ''.join(chunks)
def close(self):
self.sock.close()
def parse_symbol_type(symbol):
""" parse_symbol_type
It parse the symbol to Rockwell Spec
:param symbol: the symbol associated to a tag
:return: A tuple containing information about the tag
"""
pass
return None
class Base(object):
_sequence = 0
def __init__(self):
if Base._sequence == 0:
Base._sequence = getpid()
else:
Base._sequence = Base._get_sequence()
self.__version__ = '0.3'
self.__sock = None
self.__direct_connections = False
self._session = 0
self._connection_opened = False
self._reply = None
self._message = None
self._target_cid = None
self._target_is_connected = False
self._tag_list = []
self._buffer = {}
self._device_description = "Device Unknown"
self._last_instance = 0
self._byte_offset = 0
self._last_position = 0
self._more_packets_available = False
self._last_tag_read = ()
self._last_tag_write = ()
self._status = (0, "")
self._output_raw = False # indicating value should be output as raw (hex)
self.attribs = {'context': '_pycomm_', 'protocol version': 1, 'rpi': 5000, 'port': 0xAF12, 'timeout': 10,
'backplane': 1, 'cpu slot': 0, 'option': 0, 'cid': '\x27\x04\x19\x71', 'csn': '\x27\x04',
'vid': '\x09\x10', 'vsn': '\x09\x10\x19\x71', 'name': 'Base', 'ip address': None}
def __len__(self):
return len(self.attribs)
def __getitem__(self, key):
return self.attribs[key]
def __setitem__(self, key, value):
self.attribs[key] = value
def __delitem__(self, key):
try:
del self.attribs[key]
except LookupError:
pass
def __iter__(self):
return iter(self.attribs)
def __contains__(self, item):
return item in self.attribs
def _check_reply(self):
raise NotImplementedError("The method has not been implemented")
@staticmethod
def _get_sequence():
""" Increase and return the sequence used with connected messages
:return: The new sequence
"""
if Base._sequence < 65535:
Base._sequence += 1
else:
Base._sequence = getpid() % 65535
return Base._sequence
def nop(self):
""" No replay command
A NOP provides a way for either an originator or target to determine if the TCP connection is still open.
"""
self._message = self.build_header(ENCAPSULATION_COMMAND['nop'], 0)
self._send()
def __repr__(self):
return self._device_description
def generate_cid(self):
self.attribs['cid'] = '{0}{1}{2}{3}'.format(chr(random.randint(0, 255)), chr(random.randint(0, 255))
, chr(random.randint(0, 255)), chr(random.randint(0, 255)))
def generate_vsn(self):
self.attribs['vsn'] = '{0}{1}{2}{3}'.format(chr(random.randint(0, 255)), chr(random.randint(0, 255))
, chr(random.randint(0, 255)), chr(random.randint(0, 255)))
def description(self):
return self._device_description
def list_identity(self):
""" ListIdentity command to locate and identify potential target
return True if the reply contains the device description
"""
self._message = self.build_header(ENCAPSULATION_COMMAND['list_identity'], 0)
self._send()
self._receive()
if self._check_reply():
try:
self._device_description = self._reply[63:-1]
return True
except Exception as e:
raise CommError(e)
return False
def send_rr_data(self, msg):
""" SendRRData transfer an encapsulated request/reply packet between the originator and target
:param msg: The message to be send to the target
:return: the replay received from the target
"""
self._message = self.build_header(ENCAPSULATION_COMMAND["send_rr_data"], len(msg))
self._message += msg
self._send()
self._receive()
return self._check_reply()
def send_unit_data(self, msg):
""" SendUnitData send encapsulated connected messages.
:param msg: The message to be send to the target
:return: the replay received from the target
"""
self._message = self.build_header(ENCAPSULATION_COMMAND["send_unit_data"], len(msg))
self._message += msg
self._send()
self._receive()
return self._check_reply()
def get_status(self):
""" Get the last status/error
This method can be used after any call to get any details in case of error
:return: A tuple containing (error group, error message)
"""
return self._status
def clear(self):
""" Clear the last status/error
:return: None; the status is reset to an empty tuple
"""
self._status = (0, "")
def build_header(self, command, length):
""" Build the encapsulate message header
The header is 24 bytes fixed length, and includes the command and the length of the optional data portion.
:return: the headre
"""
try:
h = command # Command UINT
h += pack_uint(length) # Length UINT
h += pack_dint(self._session) # Session Handle UDINT
h += pack_dint(0) # Status UDINT
h += self.attribs['context'] # Sender Context 8 bytes
h += pack_dint(self.attribs['option']) # Option UDINT
return h
except Exception as e:
raise CommError(e)
def register_session(self):
""" Register a new session with the communication partner
:return: None if any error, otherwise return the session number
"""
if self._session:
return self._session
self._session = 0
self._message = self.build_header(ENCAPSULATION_COMMAND['register_session'], 4)
self._message += pack_uint(self.attribs['protocol version'])
self._message += pack_uint(0)
self._send()
self._receive()
if self._check_reply():
self._session = unpack_dint(self._reply[4:8])
logger.debug("Session ={0} has been registered.".format(print_bytes_line(self._reply[4:8])))
return self._session
        self._status = (1, "Warning! the session has not been registered.")
logger.warning(self._status)
return None
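    # Minimal sketch of the happy path, given an instance plc of a concrete
    # subclass (the session handle itself is assigned by the target):
    #
    #   if plc.register_session() is None:
    #       print(plc.get_status())  # (error group, error message)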
def forward_open(self):
""" CIP implementation of the forward open message
Refer to ODVA documentation Volume 1 3-5.5.2
:return: False if any error in the replayed message
"""
if self._session == 0:
            self._status = (4, "A session must be registered before calling forward_open.")
            raise CommError("A session must be registered before calling forward_open.")
forward_open_msg = [
FORWARD_OPEN,
pack_usint(2),
CLASS_ID["8-bit"],
CLASS_CODE["Connection Manager"], # Volume 1: 5-1
INSTANCE_ID["8-bit"],
CONNECTION_MANAGER_INSTANCE['Open Request'],
PRIORITY,
TIMEOUT_TICKS,
pack_dint(0),
self.attribs['cid'],
self.attribs['csn'],
self.attribs['vid'],
self.attribs['vsn'],
TIMEOUT_MULTIPLIER,
'\x00\x00\x00',
pack_dint(self.attribs['rpi'] * 1000),
pack_uint(CONNECTION_PARAMETER['Default']),
pack_dint(self.attribs['rpi'] * 1000),
pack_uint(CONNECTION_PARAMETER['Default']),
TRANSPORT_CLASS, # Transport Class
# CONNECTION_SIZE['Backplane'],
# pack_usint(self.attribs['backplane']),
# pack_usint(self.attribs['cpu slot']),
CLASS_ID["8-bit"],
CLASS_CODE["Message Router"],
INSTANCE_ID["8-bit"],
pack_usint(1)
]
        if self.__direct_connections:
            # insert the connection path size for a direct network connection
            forward_open_msg[20:20] = [
                CONNECTION_SIZE['Direct Network'],
            ]
        else:
            # insert the connection path size plus the backplane port and CPU slot route
            forward_open_msg[20:20] = [
                CONNECTION_SIZE['Backplane'],
                pack_usint(self.attribs['backplane']),
                pack_usint(self.attribs['cpu slot'])
            ]
if self.send_rr_data(
build_common_packet_format(DATA_ITEM['Unconnected'], ''.join(forward_open_msg), ADDRESS_ITEM['UCMM'],)):
self._target_cid = self._reply[44:48]
self._target_is_connected = True
return True
self._status = (4, "forward_open returned False")
return False
def forward_close(self):
""" CIP implementation of the forward close message
Each connection opened with the froward open message need to be closed.
Refer to ODVA documentation Volume 1 3-5.5.3
:return: False if any error in the replayed message
"""
if self._session == 0:
            self._status = (5, "A session must be registered before calling forward_close.")
            raise CommError("A session must be registered before calling forward_close.")
forward_close_msg = [
FORWARD_CLOSE,
pack_usint(2),
CLASS_ID["8-bit"],
CLASS_CODE["Connection Manager"], # Volume 1: 5-1
INSTANCE_ID["8-bit"],
CONNECTION_MANAGER_INSTANCE['Open Request'],
PRIORITY,
TIMEOUT_TICKS,
self.attribs['csn'],
self.attribs['vid'],
self.attribs['vsn'],
# CONNECTION_SIZE['Backplane'],
# '\x00', # Reserved
# pack_usint(self.attribs['backplane']),
# pack_usint(self.attribs['cpu slot']),
CLASS_ID["8-bit"],
CLASS_CODE["Message Router"],
INSTANCE_ID["8-bit"],
pack_usint(1)
]
        if self.__direct_connections:
            # insert the connection path size and reserved byte for a direct network connection
            forward_close_msg[11:11] = [
                CONNECTION_SIZE['Direct Network'],
                '\x00'
            ]
        else:
            # insert the connection path size, reserved byte, backplane port and CPU slot route
            forward_close_msg[11:11] = [
                CONNECTION_SIZE['Backplane'],
                '\x00',
                pack_usint(self.attribs['backplane']),
                pack_usint(self.attribs['cpu slot'])
            ]
if self.send_rr_data(
build_common_packet_format(DATA_ITEM['Unconnected'], ''.join(forward_close_msg), ADDRESS_ITEM['UCMM'])):
self._target_is_connected = False
return True
self._status = (5, "forward_close returned False")
logger.warning(self._status)
return False
def un_register_session(self):
""" Un-register a connection
"""
self._message = self.build_header(ENCAPSULATION_COMMAND['unregister_session'], 0)
self._send()
        self._session = 0  # reset to 0, matching the checks in close() and clean_up()
def _send(self):
"""
socket send
:return: true if no error otherwise false
"""
try:
logger.debug(print_bytes_msg(self._message, '-------------- SEND --------------'))
self.__sock.send(self._message)
except Exception as e:
# self.clean_up()
raise CommError(e)
def _receive(self):
"""
socket receive
:return: true if no error otherwise false
"""
try:
self._reply = self.__sock.receive()
logger.debug(print_bytes_msg(self._reply, '----------- RECEIVE -----------'))
except Exception as e:
# self.clean_up()
raise CommError(e)
    def open(self, ip_address, direct_connection=False):
        """
        socket open
        :param ip_address: the IP address of the target to connect to
        :param direct_connection: use a direct network connection; disabled by default
        :return: True if no error occurred, False otherwise
        """
        # set the type of connection needed
        self.__direct_connections = direct_connection
        # handle the socket layer
        if self._connection_opened:
            return True
        try:
            if self.__sock is None:
                self.__sock = Socket()
            self.__sock.connect(ip_address, self.attribs['port'])
            self._connection_opened = True
            self.attribs['ip address'] = ip_address
            self.generate_cid()
            self.generate_vsn()
            if self.register_session() is None:
                self._status = (13, "Session not registered")
                return False
            # close any connection a previous, unclosed session may have left open on the target
            self.forward_close()
            return True
        except Exception as e:
            # self.clean_up()
            raise CommError(e)
def close(self):
"""
socket close
:return: true if no error otherwise false
"""
error_string = ''
try:
if self._target_is_connected:
self.forward_close()
if self._session != 0:
self.un_register_session()
        except Exception as e:
            error_string += "Error on close() -> session Err: %s" % e
            logger.warning(error_string)
        # clean up the socket layer even if the session teardown failed
try:
if self.__sock:
self.__sock.close()
        except Exception as e:
            error_string += "; close() -> __sock.close Err: %s" % e
            logger.warning(error_string)
self.clean_up()
if error_string:
raise CommError(error_string)
def clean_up(self):
self.__sock = None
self._target_is_connected = False
self._session = 0
self._connection_opened = False
def is_connected(self):
return self._connection_opened
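# A minimal end-to-end sketch of the encapsulation lifecycle implemented above,
# assuming a concrete subclass of Base (here called Driver) that implements
# _check_reply, and a reachable target at a made-up address:
#
#   if __name__ == '__main__':
#       plc = Driver()
#       try:
#           if plc.open('192.168.1.10'):   # TCP connect + RegisterSession
#               if plc.forward_open():     # open a CIP connection to the target
#                   pass                   # ... send_rr_data / send_unit_data ...
#       except CommError as e:
#           print(e)
#       finally:
#           plc.close()                    # ForwardClose + UnRegisterSession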

BIN
pycomm/cip/cip_base.pyc Normal file

Binary file not shown.

483
pycomm/cip/cip_const.py Normal file
View File

@@ -0,0 +1,483 @@
# -*- coding: utf-8 -*-
#
# cip_const.py - A set of structures and constants used to implement the EtherNet/IP protocol
#
#
# Copyright (c) 2014 Agostino Ruscito <ruscito@gmail.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
ELEMENT_ID = {
"8-bit": '\x28',
"16-bit": '\x29',
"32-bit": '\x2a'
}
CLASS_ID = {
"8-bit": '\x20',
"16-bit": '\x21',
}
INSTANCE_ID = {
"8-bit": '\x24',
"16-bit": '\x25'
}
ATTRIBUTE_ID = {
"8-bit": '\x30',
"16-bit": '\x31'
}
# Paths are combined as:
#   CLASS_ID + PATH
# For example, the PCCC path is CLASS_ID["8-bit"] + PATH["PCCC"] -> 0x20, 0x67, 0x24, 0x01.
PATH = {
'Connection Manager': '\x06\x24\x01',
'Router': '\x02\x24\x01',
'Backplane Data Type': '\x66\x24\x01',
'PCCC': '\x67\x24\x01',
'DHCP Channel A': '\xa6\x24\x01\x01\x2c\x01',
'DHCP Channel B': '\xa6\x24\x01\x02\x2c\x01'
}
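# For example, the full PCCC request path from the comment above can be built
# by simple concatenation:
#
#   pccc_path = CLASS_ID["8-bit"] + PATH["PCCC"]   # '\x20\x67\x24\x01'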
ENCAPSULATION_COMMAND = { # Volume 2: 2-3.2 Command Field UINT 2 byte
"nop": '\x00\x00',
"list_targets": '\x01\x00',
"list_services": '\x04\x00',
"list_identity": '\x63\x00',
"list_interfaces": '\x64\x00',
"register_session": '\x65\x00',
"unregister_session": '\x66\x00',
"send_rr_data": '\x6F\x00',
"send_unit_data": '\x70\x00'
}
"""
When a tag is created, an instance of the Symbol Object (Class ID 0x6B) is created
inside the controller.
When a UDT is created, an instance of the Template object (Class ID 0x6C) is
created to hold information about the structure makeup.
"""
CLASS_CODE = {
"Message Router": '\x02', # Volume 1: 5-1
"Symbol Object": '\x6b',
"Template Object": '\x6c',
"Connection Manager": '\x06' # Volume 1: 3-5
}
CONNECTION_MANAGER_INSTANCE = {
'Open Request': '\x01',
'Open Format Rejected': '\x02',
'Open Resource Rejected': '\x03',
'Open Other Rejected': '\x04',
'Close Request': '\x05',
'Close Format Request': '\x06',
'Close Other Request': '\x07',
'Connection Timeout': '\x08'
}
TAG_SERVICES_REQUEST = {
"Read Tag": 0x4c,
"Read Tag Fragmented": 0x52,
"Write Tag": 0x4d,
"Write Tag Fragmented": 0x53,
"Read Modify Write Tag": 0x4e,
"Multiple Service Packet": 0x0a,
"Get Instance Attributes List": 0x55,
"Get Attributes": 0x03,
"Read Template": 0x4c,
}
TAG_SERVICES_REPLY = {
    0xcc: "Read Tag",  # 0xcc is also the Read Template reply; both requests use service code 0x4c
    0xd2: "Read Tag Fragmented",
    0xcd: "Write Tag",
    0xd3: "Write Tag Fragmented",
    0xce: "Read Modify Write Tag",
    0x8a: "Multiple Service Packet",
    0xd5: "Get Instance Attributes List",
    0x83: "Get Attributes"
}
I_TAG_SERVICES_REPLY = {
"Read Tag": 0xcc,
"Read Tag Fragmented": 0xd2,
"Write Tag": 0xcd,
"Write Tag Fragmented": 0xd3,
"Read Modify Write Tag": 0xce,
"Multiple Service Packet": 0x8a,
"Get Instance Attributes List": 0xd5,
"Get Attributes": 0x83,
"Read Template": 0xcc
}
"""
EtherNet/IP Encapsulation Error Codes
Standard CIP encapsulation error codes returned in the encapsulation message header
"""
STATUS = {
0x0000: "Success",
0x0001: "The sender issued an invalid or unsupported encapsulation command",
0x0002: "Insufficient memory",
0x0003: "Poorly formed or incorrect data in the data portion",
0x0064: "An originator used an invalid session handle when sending an encapsulation message to the target",
0x0065: "The target received a message of invalid length",
0x0069: "Unsupported Protocol Version"
}
"""
MSG Error Codes:
The following error codes have been taken from:
Rockwell Automation Publication
1756-RM003P-EN-P - December 2014
"""
SERVICE_STATUS = {
0x01: "Connection failure (see extended status)",
0x02: "Insufficient resource",
0x03: "Invalid value",
0x04: "IOI syntax error. A syntax error was detected decoding the Request Path (see extended status)",
0x05: "Destination unknown, class unsupported, instance \nundefined or structure element undefined (see extended status)",
0x06: "Insufficient Packet Space",
0x07: "Connection lost",
0x08: "Service not supported",
0x09: "Error in data segment or invalid attribute value",
0x0A: "Attribute list error",
0x0B: "State already exist",
0x0C: "Object state conflict",
0x0D: "Object already exist",
0x0E: "Attribute not settable",
0x0F: "Permission denied",
0x10: "Device state conflict",
0x11: "Reply data too large",
0x12: "Fragmentation of a primitive value",
0x13: "Insufficient command data",
0x14: "Attribute not supported",
0x15: "Too much data",
0x1A: "Bridge request too large",
0x1B: "Bridge response too large",
0x1C: "Attribute list shortage",
0x1D: "Invalid attribute list",
0x1E: "Request service error",
0x1F: "Connection related failure (see extended status)",
0x22: "Invalid reply received",
0x25: "Key segment error",
0x26: "Invalid IOI error",
0x27: "Unexpected attribute in list",
0x28: "DeviceNet error - invalid member ID",
0x29: "DeviceNet error - member not settable",
0xD1: "Module not in run state",
0xFB: "Message port not supported",
0xFC: "Message unsupported data type",
0xFD: "Message uninitialized",
0xFE: "Message timeout",
0xff: "General Error (see extended status)"
}
EXTEND_CODES = {
0x01: {
0x0100: "Connection in use",
0x0103: "Transport not supported",
0x0106: "Ownership conflict",
0x0107: "Connection not found",
0x0108: "Invalid connection type",
0x0109: "Invalid connection size",
0x0110: "Module not configured",
0x0111: "EPR not supported",
0x0114: "Wrong module",
0x0115: "Wrong device type",
0x0116: "Wrong revision",
0x0118: "Invalid configuration format",
0x011A: "Application out of connections",
0x0203: "Connection timeout",
0x0204: "Unconnected message timeout",
0x0205: "Unconnected send parameter error",
0x0206: "Message too large",
0x0301: "No buffer memory",
0x0302: "Bandwidth not available",
0x0303: "No screeners available",
0x0305: "Signature match",
0x0311: "Port not available",
0x0312: "Link address not available",
0x0315: "Invalid segment type",
0x0317: "Connection not scheduled"
},
0x04: {
0x0000: "Extended status out of memory",
0x0001: "Extended status out of instances"
},
0x05: {
0x0000: "Extended status out of memory",
0x0001: "Extended status out of instances"
},
0x1F: {
0x0203: "Connection timeout"
},
0xff: {
0x7: "Wrong data type",
0x2001: "Excessive IOI",
0x2002: "Bad parameter value",
0x2018: "Semaphore reject",
0x201B: "Size too small",
0x201C: "Invalid size",
0x2100: "Privilege failure",
0x2101: "Invalid keyswitch position",
0x2102: "Password invalid",
0x2103: "No password issued",
0x2104: "Address out of range",
0x2105: "Access beyond end of the object",
0x2106: "Data in use",
0x2107: "Tag type used n request dose not match the target tag's data type",
0x2108: "Controller in upload or download mode",
0x2109: "Attempt to change number of array dimensions",
0x210A: "Invalid symbol name",
0x210B: "Symbol does not exist",
0x210E: "Search failed",
0x210F: "Task cannot start",
0x2110: "Unable to write",
0x2111: "Unable to read",
0x2112: "Shared routine not editable",
0x2113: "Controller in faulted mode",
0x2114: "Run mode inhibited"
}
}
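# Lookup sketch: extended codes are keyed first by the general service status,
# then by the extended status word, e.g.:
#
#   EXTEND_CODES[0x01][0x0107]   # -> 'Connection not found'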
DATA_ITEM = {
'Connected': '\xb1\x00',
'Unconnected': '\xb2\x00'
}
ADDRESS_ITEM = {
'Connection Based': '\xa1\x00',
'Null': '\x00\x00',
'UCMM': '\x00\x00'
}
UCMM = {
'Interface Handle': 0,
'Item Count': 2,
'Address Type ID': 0,
'Address Length': 0,
'Data Type ID': 0x00b2
}
CONNECTION_SIZE = {
'Backplane': '\x03', # CLX
'Direct Network': '\x02'
}
HEADER_SIZE = 24
EXTENDED_SYMBOL = '\x91'
BOOL_ONE = 0xff
REQUEST_SERVICE = 0
REQUEST_PATH_SIZE = 1
REQUEST_PATH = 2
SUCCESS = 0
INSUFFICIENT_PACKETS = 6
OFFSET_MESSAGE_REQUEST = 40
FORWARD_CLOSE = '\x4e'
UNCONNECTED_SEND = '\x52'
FORWARD_OPEN = '\x54'
LARGE_FORWARD_OPEN = '\x5b'
GET_CONNECTION_DATA = '\x56'
SEARCH_CONNECTION_DATA = '\x57'
GET_CONNECTION_OWNER = '\x5a'
MR_SERVICE_SIZE = 2
PADDING_BYTE = '\x00'
PRIORITY = '\x0a'
TIMEOUT_TICKS = '\x05'
TIMEOUT_MULTIPLIER = '\x01'
TRANSPORT_CLASS = '\xa3'
CONNECTION_PARAMETER = {
'PLC5': 0x4302,
'SLC500': 0x4302,
'CNET': 0x4320,
'DHP': 0x4302,
'Default': 0x43f8,
}
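# A decoding sketch of the 'Default' parameter word 0x43f8, following the CIP
# network connection parameter bit layout (worth verifying against ODVA
# Volume 1): point-to-point connection type, low priority, variable size,
# 504-byte (0x1f8) connection size.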
"""
Atomic Data Type:
Bit = BOOL
Bit array = DWORD (32-bit boolean array)
8-bit integer = SINT
16-bit integer = INT
32-bit integer = DINT
32-bit float = REAL
64-bit integer = LINT
From Rockwell Automation Publication 1756-PM020C-EN-P November 2012:
When reading a BOOL tag, the values returned for 0 and 1 are 0 and 0xff, respectively.
"""
S_DATA_TYPE = {
'BOOL': 0xc1,
'SINT': 0xc2, # Signed 8-bit integer
'INT': 0xc3, # Signed 16-bit integer
'DINT': 0xc4, # Signed 32-bit integer
'LINT': 0xc5, # Signed 64-bit integer
'USINT': 0xc6, # Unsigned 8-bit integer
'UINT': 0xc7, # Unsigned 16-bit integer
'UDINT': 0xc8, # Unsigned 32-bit integer
'ULINT': 0xc9, # Unsigned 64-bit integer
'REAL': 0xca, # 32-bit floating point
'LREAL': 0xcb, # 64-bit floating point
'STIME': 0xcc, # Synchronous time
'DATE': 0xcd,
'TIME_OF_DAY': 0xce,
'DATE_AND_TIME': 0xcf,
'STRING': 0xd0, # character string (1 byte per character)
'BYTE': 0xd1, # byte string 8-bits
'WORD': 0xd2, # byte string 16-bits
'DWORD': 0xd3, # byte string 32-bits
'LWORD': 0xd4, # byte string 64-bits
'STRING2': 0xd5, # character string (2 byte per character)
'FTIME': 0xd6, # Duration high resolution
'LTIME': 0xd7, # Duration long
'ITIME': 0xd8, # Duration short
'STRINGN': 0xd9, # character string (n byte per character)
'SHORT_STRING': 0xda, # character string (1 byte per character, 1 byte length indicator)
'TIME': 0xdb, # Duration in milliseconds
'EPATH': 0xdc, # CIP Path segment
'ENGUNIT': 0xdd, # Engineering Units
'STRINGI': 0xde # International character string
}
I_DATA_TYPE = {
0xc1: 'BOOL',
0xc2: 'SINT', # Signed 8-bit integer
0xc3: 'INT', # Signed 16-bit integer
0xc4: 'DINT', # Signed 32-bit integer
0xc5: 'LINT', # Signed 64-bit integer
0xc6: 'USINT', # Unsigned 8-bit integer
0xc7: 'UINT', # Unsigned 16-bit integer
0xc8: 'UDINT', # Unsigned 32-bit integer
0xc9: 'ULINT', # Unsigned 64-bit integer
0xca: 'REAL', # 32-bit floating point
0xcb: 'LREAL', # 64-bit floating point
0xcc: 'STIME', # Synchronous time
0xcd: 'DATE',
0xce: 'TIME_OF_DAY',
0xcf: 'DATE_AND_TIME',
0xd0: 'STRING', # character string (1 byte per character)
0xd1: 'BYTE', # byte string 8-bits
0xd2: 'WORD', # byte string 16-bits
0xd3: 'DWORD', # byte string 32-bits
0xd4: 'LWORD', # byte string 64-bits
0xd5: 'STRING2', # character string (2 byte per character)
0xd6: 'FTIME', # Duration high resolution
0xd7: 'LTIME', # Duration long
0xd8: 'ITIME', # Duration short
0xd9: 'STRINGN', # character string (n byte per character)
0xda: 'SHORT_STRING', # character string (1 byte per character, 1 byte length indicator)
0xdb: 'TIME', # Duration in milliseconds
0xdc: 'EPATH', # CIP Path segment
0xdd: 'ENGUNIT', # Engineering Units
0xde: 'STRINGI' # International character string
}
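# S_DATA_TYPE and I_DATA_TYPE are mutual inverses, e.g.:
#
#   I_DATA_TYPE[S_DATA_TYPE['DINT']]   # -> 'DINT'
#   S_DATA_TYPE[I_DATA_TYPE[0xca]]     # -> 0xca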
REPLAY_INFO = {  # descriptions of reply/command codes
0x4e: 'FORWARD_CLOSE (4E,00)',
0x52: 'UNCONNECTED_SEND (52,00)',
0x54: 'FORWARD_OPEN (54,00)',
0x6f: 'send_rr_data (6F,00)',
0x70: 'send_unit_data (70,00)',
0x00: 'nop',
0x01: 'list_targets',
0x04: 'list_services',
0x63: 'list_identity',
0x64: 'list_interfaces',
0x65: 'register_session',
0x66: 'unregister_session',
}
PCCC_DATA_TYPE = {
'N': '\x89',
'B': '\x85',
'T': '\x86',
'C': '\x87',
'S': '\x84',
'F': '\x8a',
'ST': '\x8d',
'A': '\x8e',
'R': '\x88',
'O': '\x8b',
'I': '\x8c'
}
PCCC_DATA_SIZE = {
'N': 2,
# 'L': 4,
'B': 2,
'T': 6,
'C': 6,
'S': 2,
'F': 4,
'ST': 84,
'A': 2,
'R': 6,
'O': 2,
'I': 2
}
PCCC_CT = {
'PRE': 1,
'ACC': 2,
'EN': 15,
'TT': 14,
'DN': 13,
'CU': 15,
'CD': 14,
'OV': 12,
'UN': 11,
'UA': 10
}
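# Sketch: PCCC_CT maps timer/counter sub-element mnemonics either to word
# offsets (PRE, ACC) or to status-bit numbers within the control word, e.g.:
#
#   PCCC_CT['PRE']   # -> 1, the preset word
#   PCCC_CT['DN']    # -> 13, the timer done bit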
PCCC_ERROR_CODE = {
-2: "Not Acknowledged (NAK)",
-3: "No Reponse, Check COM Settings",
-4: "Unknown Message from DataLink Layer",
-5: "Invalid Address",
-6: "Could Not Open Com Port",
-7: "No data specified to data link layer",
-8: "No data returned from PLC",
-20: "No Data Returned",
16: "Illegal Command or Format, Address may not exist or not enough elements in data file",
32: "PLC Has a Problem and Will Not Communicate",
48: "Remote Node Host is Missing, Disconnected, or Shut Down",
64: "Host Could Not Complete Function Due To Hardware Fault",
80: "Addressing problem or Memory Protect Rungs",
96: "Function not allows due to command protection selection",
112: "Processor is in Program mode",
128: "Compatibility mode file missing or communication zone problem",
144: "Remote node cannot buffer command",
240: "Error code in EXT STS Byte"
}

BIN
pycomm/cip/cip_const.pyc Normal file

Binary file not shown.

7
pycomm/common.py Normal file
View File

@@ -0,0 +1,7 @@
__author__ = 'Agostino Ruscito'
__version__ = "1.0.8"
__date__ = "08 03 2015"
class PycommError(Exception):
pass
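# PycommError is the common base class for the package's exceptions; a sketch
# of how a module-specific error such as CommError (used in cip_base.py above)
# can derive from it:
#
#   class CommError(PycommError):
#       pass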

BIN
pycomm/common.pyc Normal file

Binary file not shown.

20
root-CA.crt Normal file
View File

@@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDQTCCAimgAwIBAgITBmyfz5m/jAo54vB4ikPmljZbyjANBgkqhkiG9w0BAQsF
ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6
b24gUm9vdCBDQSAxMB4XDTE1MDUyNjAwMDAwMFoXDTM4MDExNzAwMDAwMFowOTEL
MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv
b3QgQ0EgMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALJ4gHHKeNXj
ca9HgFB0fW7Y14h29Jlo91ghYPl0hAEvrAIthtOgQ3pOsqTQNroBvo3bSMgHFzZM
9O6II8c+6zf1tRn4SWiw3te5djgdYZ6k/oI2peVKVuRF4fn9tBb6dNqcmzU5L/qw
IFAGbHrQgLKm+a/sRxmPUDgH3KKHOVj4utWp+UhnMJbulHheb4mjUcAwhmahRWa6
VOujw5H5SNz/0egwLX0tdHA114gk957EWW67c4cX8jJGKLhD+rcdqsq08p8kDi1L
93FcXmn/6pUCyziKrlA4b9v7LWIbxcceVOF34GfID5yHI9Y/QCB/IIDEgEw+OyQm
jgSubJrIqg0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC
AYYwHQYDVR0OBBYEFIQYzIU07LwMlJQuCFmcx7IQTgoIMA0GCSqGSIb3DQEBCwUA
A4IBAQCY8jdaQZChGsV2USggNiMOruYou6r4lK5IpDB/G/wkjUu0yKGX9rbxenDI
U5PMCCjjmCXPI6T53iHTfIUJrU6adTrCC2qJeHZERxhlbI1Bjjt/msv0tadQ1wUs
N+gDS63pYaACbvXy8MWy7Vu33PqUXHeeE6V/Uq2V8viTO96LXFvKWlJbYK8U90vv
o/ufQJVtMVT8QtPHRh8jrdkPSHCa2XV4cdFyQzR1bldZwgJcJmApzyMZFo6IQ6XU
5MsI+yMRQ+hDKXJioaldXgjUkK642M4UwtBV8ob2xJNDd2ZhwLnoQdeXeGADbkpy
rqXRfboQnoZsG4q5WTP468SQvvG5
-----END CERTIFICATE-----

Some files were not shown because too many files have changed in this diff.